0-8186-8126-8/97 $10.00 © 1997 IEEE
or object-oriented), but also intended application domain. Thus, FORTRAN was designed specifically for scientific computing and COBOL for business applications; real-time computing can similarly benefit from special-purpose languages. While several languages have been designed or designated to be used in real-time computing, until recently they have lacked, to a

and to explicitly refuse to compile programs which contained segments (other than the outermost driver loop) whose execution time was unbounded, or otherwise failed to meet their timing constraints. Related languages include Tomal [18], Flex [23], RTC++ [16], and High-Integrity PEARL [42].
• Compiler and environment tools for off-the-clock profiling, monitoring, debugging, testing and evaluating complex computer systems, both in the development phase, and for software maintenance. These would include, for instance, a logger and various data-flow tools for displaying def-use chains, call graphs, interfaces, and so on.

• A workload generator, a simulator, and a testbed for determining the net profitability in applying certain types of heuristics, transformations, and/or policies to applications with synthesized realistic workloads, constraints, and dependencies. These tools will also permit testing of partially-developed systems.

• A run-time environment, including kernel support for communication and dynamic scheduling, both for symbolic execution/simulation and for actual execution.

• Interfaces and hooks for other tools, including on-the-clock observers and transformations to support fault tolerance and other requirements, and for the analyses needed to support those tools.
partial or complete static scheduler.

The run-time preprocessor (linker) translates the intermediate code into executable code. The run-time kernel uses the executable code and the final constraint file and consults the static schedule generated by the schedulability analyzer to schedule tasks, allocate resources, and manage object queues. The network simulator provides the kernel with the delays due to communications (transmissions and message queuing). Finally, the user interface component displays some measurements, such as performance, processes missing deadlines, and average case improvement.

3 The compilation process

Inputs to the compilation process include (1) the source code, (2) a file containing descriptions of architectural components, describing processors, links, devices, etc. (for now, we assume a homogeneous network with an arbitrary topology), including instruction-class/time maps, network topology, and other interconnection details, and (3) a (possibly empty) file of global compile-time assertions for the partial evaluator. The output from the compiler will be an intermediate code program (in C++) and a timing constraints file. In addition, the compiler will construct the graphical representations described above: a call graph (caller and callee relationships and bindings), and for each process and method, a data dependence representation used by the timing tool, and a control flow graph, to be used by the analysis/transformation engine, as illustrated in Section 5. Currently, the compiler generates use-def and def-use chains [2] as a data dependence representation, with monolithic handling of arrays; that is, a reference to one entry of an array is considered as using the whole array. (However, since records in CRL cannot contain access/pointer fields, each field accessor is represented by a separate variable.) The intermediate code is then subject to transformations by the analysis/transformation engine after timing analysis. Correspondence between generated code and control flow graph is currently maintained by two pointers per basic block, to the starting and ending line numbers of the translated block, respectively, which is sufficient for our current transformation set.

Some restrictions have been imposed to facilitate the compilation process. As in Pascal, use of, or reference to, any variable or object must be preceded by an explicit declaration of that variable or object. All parameters of objects, methods and threads are passed by value, result, or value-result (with the compiler free to optimize if appropriate); in addition, object parameters must be explicitly specified as imported or exported (or neither). The compiler will match any call to a method or a thread against the interface of that method or thread. The language provides only static scoping and at present disallows all aliasing. This places severe restrictions on the use of array index expressions. These restrictions, both on array index expressions and aliasing between array elements, will probably be relaxed in the future (particularly as arrays are treated as monolithic in our analyses).

Currently, we do not assume any target architecture for the compilation process. Given such a platform, the transformed C++ code will be further compiled and linked with other library routines, and the kernel will be responsible for invoking the generated executable code.

4 The timing tool

The timing tool is used to provide a safe static estimate of the execution time of programs. Inputs to the tool include the timing map of instructions executed by the target architecture, given as a table of (instruction type, processor type, required execution time) triples. As we currently assume a homogeneous platform, the timing map is currently a set of (instruction type, required execution time) pairs. We also do not currently model the effects of architectural features such as addressing modes, memory hierarchies, or pipelining. We hope to consider these in our future work.

To resolve the execution time of calls, the timing tool computes for each method the total time for instructions and calls, propagating backward from leaf methods in the call graph, which is unwound if necessary in the presence of (bounded) recursion. For remote calls, the tool must consider communication delays that messages may anticipate due to contention. We assume an upper bound on the propagation of messages throughout the network; delays can be left as parameters in timing expressions, or this upper bound substituted.

The timing tool calculates two types of execution times: first, the worst-case execution time for each process, to resolve references to other methods through calls; and second, a timing annotation on every executable statement, both simple and structured, to aid in transformation, using the timing map.

Currently, the execution time of each basic block is computed by the timing tool and stored in the process or method control flow graph, to be used in justifying safety and profitability of transformations, which then update the timing information. The timing tool also adds a statement to the output intermediate code at the end of each basic block, which adds the block execution time to a local time variable; the value of this variable is used to propagate timing information to the run-time environment, as discussed in Section 8. The worst-case execution time of the whole method/thread is then deduced using the execution time of basic blocks and the time it spends in calls; this is then stored with the method/thread entry in the call graph.

As we are generating intermediate rather than target code in the current implementation, we use a map for the basic data types (classes) defined by the language. The execution time of instructions (methods in basic classes) is based on assumed properties of the architecture and operating system. Compound statements and calls are annotated by the compiler with their initialization time and any other constant execution time, exclusive of the cost of the statements in the
body; for example, for a loop, the timing costs include initialization time and time for a worst-case number of instances of jumping to the header and evaluating the loop condition. The time for the entire composite statement can be derived by time-attribute combining rules for each type of structured statement.

The output of the timing tool is a timed intermediate code. The transformation engine then uses that output and the timing constraints file generated by the compiler to check the feasibility, safety and profitability of transformations, as elaborated in the next section.

5 The analysis and transformation engine

As mentioned above, the transformation engine uses the data dependence graph, the call graph, and the control flow graph generated by the compiler to detect various possible code transformations [45, 46, 51]. The timing constraints file is consulted to test the safety of proposed transformations; the timing profile generated by the timing tool is used to measure profitability.

Currently the tool supports a limited number of transformations. Ultimately, the tool will support a much larger number of transformation rules aimed at improving code and/or facilitating analyses while not worsening the timing of the code. The engine applies the transformations as a sequence of steps. In each step, a different transformation or kind of transformation is considered. The order in which the transformations are to be applied remains an issue to be addressed in future experiments. It may also be useful to repeat some steps because of successful transformations in other steps. For example, we can re-apply branch/clause transformations if a condition is eliminated by the conditional linker. This dependence is represented by the feedback arrow (1) in Figure 1.

The analysis/transformation component has two effects: first, it changes the code according to the rules of the transformations applied; second, it may relax some constraints or strengthen assertions, most likely through interaction with the developer/user. Some transformations, such as branch/clause transformations, change only the timing analysis and/or final code, without affecting timing constraints, while others, such as dead-code elimination, may affect both code and constraints.

Apart from per-process transformations, we have developed an approach to collect, manipulate or delete delays across processes (see [52]), either for process optimization, or to provide "free time" for monitoring and debugging, context switch, or checkpointing for fault tolerance. The tool will eventually consider non-functional criteria beyond timing, such as fault-tolerance or security.

The output from this tool is updated timed intermediate code, as well as an updated timing constraints file. These outputs are then used by the schedulability analyzer, as illustrated in the following section. The schedulability analyzer may need to call the transformation engine if it is not able to guarantee schedulability (branch (2) in Figure 1), as clarified in the next section.

6 The schedulability analyzer

The schedulability forms of the code produced by the analysis and transformation engine and of the constraint file are passed to a schedulability analyzer, which may use either an exhaustive or a heuristic analysis to produce an assignment and a certificate of schedulability. The analyzer may also report a partial static schedule to be used by the run-time environment. In addition, it may generate directives for migration and cloning to the assignment tool.

The schedulability analyzer may also consult the assignment tool for the feasibility and profitability of certain transformations, as in the case of parallelization and speculative execution (feedback (3) in Figure 1). If some of these are either infeasible or unprofitable, the schedulability analyzer will report this fact (feedback (2) in Figure 1) to the transformation engine, requiring it to undo the transformation. Moreover, if the analyzer cannot find a feasible schedule, it may request more effort to be spent on analyses and transformations, either by focused optimization, or in the sense of [8], to enhance the schedulability of the code.

Finally, the schedulability analyzer generates certified intermediate code from which the compiler back-end will generate executable code. In the current implementation, we use the C++ compiler and linker, as discussed in the following section.

7 The linker

As mentioned, we are not considering a specific architecture at the moment. The target code machine implementation is a mixture of native C++ statements for some control statement support, and a set of C++ class objects, types, and resources for the kernel interface. Thus, the linker is simply the C++ compiler, which compiles the certified intermediate code generated by the schedulability analyzer, and links that code with kernel code, as well as with the basic C++ classes.

The executable code generated in this stage is executed by the run-time environment, which simulates distributed processing of the code over a network of processors.

8 The run-time environment

The run-time environment consists of a kernel, a network simulator and a user interface.

8.1 The kernel

While the kernel is currently physically implemented as a single process, it maintains the abstraction of distributed operation and can be easily split up should our platform become physically distributed.
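This single-process emulation of a distributed kernel might be organized along the following lines. This is only a hypothetical sketch with invented names (Request, KernelReplica, Kernel), not the actual CRL kernel:

```cpp
#include <cstddef>
#include <deque>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: one kernel replica is emulated per processor,
// and each object (hosted on exactly one processor) has its own queue
// serializing the requests (calls) directed to its methods.
struct Request {
    std::string method;   // requested method of the object
    long deadline;        // used for real-time scheduling decisions
};

struct KernelReplica {
    // object name -> queue of pending requests for that object
    std::map<std::string, std::deque<Request>> object_queues;
};

struct Kernel {
    std::vector<KernelReplica> replicas;   // one per emulated processor

    explicit Kernel(std::size_t processors) : replicas(processors) {}

    // Were the platform physically distributed, each replica would run
    // on its own node; here one process simply iterates over all of them.
    std::size_t pending_requests() const {
        std::size_t n = 0;
        for (const auto& r : replicas)
            for (const auto& q : r.object_queues)
                n += q.second.size();
        return n;
    }
};
```

Splitting this up for a physically distributed platform would then amount to giving each KernelReplica its own process, as the text suggests.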
The kernel is executed as a continuous loop; every iteration, it checks an event list, selects the next event, and performs the appropriate action. Events include: scheduling a thread, executing a call to a method, sending a message to a remote object (making a call to a method of an object currently residing on a different processor), and updating object queues. Every entry in the event table has a time-stamp to determine when the kernel should react to that event, and every object has a queue to serialize access to all methods exported by that object. The order in that queue depends on the scheduling criteria used and the arrival order of messages. A kernel replica is emulated for every processor in the network.

Synchronized clocks are emulated (maintained by the kernel) for the entire network. Should the implementation migrate to a physically-distributed system, we would recommend use of GPS or standard time sources for synchronization, as advocated in [14]. The time is measured in abstract real-time units. All events are stamped with time of occurrence.

The kernel responds to an event by initiating the required activity; for example, by activating thread execution or initiating the execution of methods. Thus, calls (except some calls to local methods or system libraries) are directed as requests or as events to the kernel. The kernel actually makes the call by executing the callee method. This implementation has an implicit problem with the values of out parameters at the conclusion of the callee, when the execution of the caller is resumed; moreover, there is no way in this design to remember the old state in case of preemption. We address these problems later in this subsection.

Each emulated kernel replica maintains two sets of queues: object queues and processor queues. The object queue is a general priority-based queue. Access to an object will be serialized using its queue. All requests (calls) to services (methods) provided by this object will be added to its queue. Every processor may host multiple objects. The processor queue contains the highest priority requests from the object queues assigned to that processor. Every loop iteration in the kernel algorithm, the object queues of every processor are checked. If there are any calls (requests) still pending, one will be scheduled to run. The selection of the method to be executed will be based on some real-time scheduling criterion, where for simplicity we currently use Earliest Deadline First. However, any scheduling discipline can be used. The kernel executes the code of that method/thread, which may generate a new set of events. The kernel marks the new events with the correct time-stamp and adds them to the event table. On completion of the task, the kernel is updated to reflect the time spent in execution (through use of the time-increment statements at the end of execution blocks); in principle, we could instead use the timing table for a static worst-case estimate of time if communication is costly. In either case, the updated time is used to stamp events produced by the executed method/thread. Message events are channeled through the network simulator (see subsection 8.2).

As mentioned earlier, the kernel makes the calls to callee object methods. This causes three problems. First, the kernel must remember values of out parameters of the call and pass them back to the caller, both for local calls, and for remote calls to methods of other objects assigned to different processors. The problem becomes still harder for remote calls that invoke other calls. The second problem is similar, but arises from preemption. The kernel must remember the values of local variables to correctly resume execution. Finally, the kernel must remember the method program counter, in order to determine the next statement to execute after resuming execution, and to keep track of the elapsed time. Nonetheless, these problems, and the transformations used to resolve them, will not affect the simulated behavior of the program.

We start by addressing the third problem. Every method/thread is subdivided into a set of non-preemptable units (submethods/subthreads); every unit then runs to completion without preemption. We believe strongly that robust real-time execution is well-supported by schedulers which preempt on the basis of major inter-object control transfers in the application program. For CRL (and many other languages), this implies that preemption should typically occur largely at calls (or returns), and the criteria we use for determining preemption points are based on calls. In general, we subdivide into two methods/threads whenever we find a call. The first part ends at the call, while the second part resumes with the following statement (that is, with the return). We will further subdivide the second part if we find another call, and so on. As usual with such approaches, this scheme may be modified in some cases, as in the presence of sequences of data-independent calls, or for long blocks of (typically array-based) computation. We discuss how the kernel will handle the execution of these units later in this section.

To overcome the second problem, we change the scope of the declaration of local variables defined within a method to be the scope of the object (assuming all recursive calls are unwound). In other words, local variables for any method will become part of the object internal state. Variables are to be renamed, e.g., by using the method name as a prefix, so that no two methods assign a common name incompatibly. Thus, in the case of local calls, the kernel does not have to worry about out parameters, as every variable (including the parameters) is part of the object state and can be seen by other methods in the object; the compiler restricts accesses to those legal under the original semantics. This will also hold for those submethods generated by inserting preemption points, as just discussed. Figure 2 shows the change in code due to insertion of preemption and changing the scopes of local declarations.

For external calls, the solution is quite different, as the caller and callee do not share state. We instead use a store-and-forward mechanism, as in [41], to remember the parameters of the previous call.¹

¹Actually, the CRL situation is easier to address than that
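The kernel loop described at the start of this subsection (a time-stamped event list, with the emulated clock advanced by the reported execution time) might be sketched as follows. The names are invented, and the per-object queues and EDF selection are elided for brevity:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of the kernel's continuous loop: a list of
// time-stamped events, processed in timestamp order against an
// emulated synchronized clock in abstract real-time units.
struct Event {
    long timestamp;                 // when the kernel should react
    std::function<long()> action;   // runs the unit; returns the time
                                    // spent, as reported by the
                                    // time-increment statements
};

struct LaterFirst {
    bool operator()(const Event& a, const Event& b) const {
        return a.timestamp > b.timestamp;   // min-heap on timestamp
    }
};

using EventList =
    std::priority_queue<Event, std::vector<Event>, LaterFirst>;

// Every iteration: select the next event, perform its action, and
// advance the clock; events generated by the executed method/thread
// would be pushed back onto the list stamped with the updated time.
long run_kernel(EventList& events) {
    long clock = 0;
    while (!events.empty()) {
        Event e = events.top();
        events.pop();
        if (e.timestamp > clock) clock = e.timestamp;
        clock += e.action();
    }
    return clock;
}
```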
[Figures 2 and 3: ORIGINAL and TRANSFORMED code]
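In the spirit of Figure 2's original/transformed pairing, the two transformations might look as follows. The class, method and helper names here are invented for illustration and are not the actual CRL output:

```cpp
// Hypothetical sketch: a method containing one call is split at that
// call into two non-preemptable submethods, and its local variables
// are hoisted into the object state, renamed with the method name as
// a prefix.
//
// ORIGINAL (one preemption point, at the call):
//   int update_status() {
//       int theta = read_sensor();
//       int speed = vel_get(theta);   // call: split here
//       return speed > 0;
//   }
struct Navigation {
    // hoisted locals, prefixed so that no two methods of the object
    // assign a common name incompatibly
    int update_status_theta = 0;
    int update_status_speed = 0;

    // First submethod: runs to completion and ends at the call; the
    // kernel would then execute the callee and later resume here.
    void update_status_1() {
        update_status_theta = read_sensor();
    }

    // Second submethod: resumes with the statement after the call;
    // the callee's result has meanwhile been placed in the object
    // state, so no out parameters need to be remembered by the kernel.
    int update_status_2() {
        return update_status_speed > 0;
    }

    int read_sensor() { return 7; }   // stand-in for real input
};
```

Because both submethods see only object state, preemption between them loses no local values, which is exactly the property the text argues for.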
int update_status_1(System_Stack *sp) {
    long time = 0;
    cin >> x;
    time += 3;
    cin >> y;
    time += 3;
    cin >> theta;
    time += 3;
    // CALL to vel.get(theta, speed)
    sp->Param_Stack.pushPointer((void*) &theta);
    sp->Param_Stack.pushPointer((void*) &speed);
    sp->Obj_Stack.pushPointer((void*) this);
    sp->Obj_Stack.pushPointer((void*) &update_status_2);
    sp->Obj_Stack.pushPointer((void*) &vel);
    sp->Obj_Stack.pushPointer((void*) &vel.get_1);
    sp->Param_Stack.pushLong(time);
    store_forward(self.id, "navigation.update_status_2",
                  vel.id, "vel.get_1", sp->no);
    return(1);
}

Figure 4: An example of the final code to be linked with the kernel.

tice, some of these parameterized expressions depend only on compile-time knowable information such as operand list length, or iteration and time constraint requirements, and are thus easily resolved and specialized into constants statically at link or elaboration time. However, timing expressions may also depend on the distribution of operands and objects and processes across the network (this is relevant in calls), and on the usage of shared resources.

Once again, the reader is asked to note that while our physical implementation is single process, the kernel fully supports distribution in the application. Naturally, there would be some differences in the actual implementation, but these would be rather mundane and well-understood. For instance, in a true distributed implementation we would need to extend store-and-forward elements with full-fledged stubs, which would (un)marshal and convert call- and return-parameters, a problem overcome in the late Eighties and early Nineties.

8.2 The network architecture simulation tool

The network simulation tool provides the timing delay that thread execution anticipates due to distributed allocation of objects. The simulator uses architectural information including a description of the network topology, various distances between nodes, and the transmission medium, as provided in the architecture description file.

Initially, the simulator reads an assignment file generated by the assignment tool, providing a mapping for every object to a processor. Interaction with the kernel is in the form of requests providing the source object and the target object as well as the size of the message to be sent. The simulator consults the object map, and determines the source processor and the target processor. Using the topology description, it then finds the appropriate route along which to transfer the request.

There is a message queue in every node maintained by the network simulator. If a message is to be transferred on a busy link, it will be queued until the link is free. The transmission rate will be dependent on the medium and the distance the message has to travel. The simulator consults some internal table (data sheet) to calculate the transmission time over that line. The kernel will not block waiting for the results of that request. The results of that call are reported back using the same message format, but the previous target object becomes a source for the return, and conversely. The total communication delay time is the sum of the transmission times and the communication queuing time (forward for the request and backward for the results). The total service time for the kernel request is the sum of the communication delay and the execution time of the specified method within the target object, plus the object queuing delay, as illustrated in the previous section.

There is no interaction between the network simulator and the user interface in the current implementation. All results and status reported to the user come only from the kernel. In the future, we may provide a graph to show the current status of the network, including communication queues and bottlenecks. In the next section, we describe the user interface subcomponent of the run-time environment.

8.3 User interface

In the current implementation, the user interface is used only to display measurements and statistics on the applicability of transformations and their effects on performance, deadlines, and processor utilization. Development of a graphical interface is work in progress. It eventually will be possible to draw execution progress figures, providing the user with information on what every processor is doing. Moreover, the measurements and statistics mentioned above will also be presented using graphs. In the future, we may extend these capabilities to include a facility for affecting the execution behavior and for providing run-time assertions.

9 Status

The CRL support environment is currently under development in the Real-Time Computing Lab at NJIT. The compiler, linker and the runtime are largely operational, though we are in the process of providing a generalized symbol table and general timing constraint support. A number of transformations, such as speculative execution, have been supported and work is on the way to support more. Basic timing and schedulability analysis tools are in place. As already stated, the work on the assignment tool for CRL is in its early stages, though there are other assignment and allocation tools in operation (developed for other Lab projects). The tools can be demonstrated, with
care, to interested parties. We anticipate being able to distribute the tools sometime in 1998 or earlier.

10 Acknowledgements

We would like to thank Ananth Ganesh, Robert Kates, Jeff Venetta and many CIS 611 (Real-Time Systems) students for their contributions to the CRL environment and tools. We are indebted to the Office of Naval Research and, recently, the National Science Foundation for providing generous support for this project. All members of the Real-Time Computing Lab at NJIT have contributed to productive and fruitful discussions, related to the continuous research on this project. Last but not least we thank the four anonymous ICECCS'97 referees for their most useful comments.

References

[3] T. Baker and G. Scallon, "An Architecture for Real-time Software Systems," IEEE Software, May 1986, 50-59; reprinted in tutorial Hard Real-Time Systems, IEEE Press (1988).

[4] T. Baker and A. Shaw, "The Cyclic Executive Model and Ada," The Real-Time Systems Journal 1, 1 (June 1989), 7-26.

[5] R. Chapman, A. Wellings, A. Burns, "Integrated Program Proof and Worst-case Timing Analysis of SPARK Ada," in Proceedings of the Workshop on Language, Compiler, and Tool Support for Real-Time Systems, June 1994.

[6] G. Chroust, "Orthogonal Extensions in Microprogrammed Multiprocessor Systems - A Chance for Increased Firmware Usage," EUROMICRO Journal, Vol. 6, No. 2, pp. 104-110, 1980.

[7] T. Chung, "CHaRTS: Compiler for Hard Real-time Systems," PhD Thesis Proposal, Purdue University, April 1994.

[8] R. Gerber and S. Hong, "Compiling Real-Time Programs with Timing Constraints Refinement and Structural Code Motion," IEEE Transactions on Software Engineering, Vol. 21, No. 5, May 1995.

[9] R. Gupta, M. Spezialetti, "Busy-Idle Profiles and Compact Task Graphs: Compile-Time Support for Interleaved and Overlapped Scheduling of Real-Time Tasks," University of Pittsburgh Technical Report TR-94-24, April 1994.

[10] V. Haase, "Real Time Behavior of Programs," IEEE Transactions on Software Engineering, SE-7(5):494-501, September 1981.

[11] W. Halang, "A Proposal for Extensions of PEARL to Facilitate Formulation of Hard Real-Time Applications," Informatik-Fachberichte 86, 573-582, Springer-Verlag, September 1984.

[12] W. Halang, "On Methods for Direct Memory Access Without Cycle Stealing," Microprocessing and Microprogramming, 17, 5, May 1986.

[13] W. Halang, "Implications on Suitable Multiprocessor Structures and Virtual Storage Management when Applying a Feasible Scheduling Algorithm in Hard Real-Time Environments," Software - Practice and Experience, 16(8), 761-769, 1986.

[14] W. Halang and A. Stoyenko, Constructing Predictable Real-Time Systems, Kluwer Academic Publishers, Dordrecht-Hingham, 1991.

[15] M. Harmon, T. Baker, D. Whalley, "A Retargetable Technique for Predicting Execution Time," Proceedings of the IEEE Real-Time Systems Symposium, IEEE (December 1992).

[18] R. Kieburtz, J. Hennessy, "TOMAL - A High-Level Programming Language for Microprocessor Process Control Applications," ACM SIGPLAN Notices, Vol. 11, No. 4, April 1976, pp. 127-134.

[19] E. Kligerman and A. Stoyenko, "Real-Time Euclid: A Language for Reliable Real-Time Systems," IEEE Transactions on Software Engineering, Vol. SE-12, No. 9, pp. 940-949, September 1986.

[20] D. Leinbaugh, "Guaranteed Response Times in a Hard-Real-Time Environment," IEEE Transactions on Software Engineering, SE-6(1):85-91, January 1980.

[21] D. Leinbaugh and M. Yamini, "Guaranteed Response Times in a Distributed Hard-Real-Time Environment," in Proceedings of the IEEE 1982 Real-Time Systems Symposium, 157-169, December 1982.

[22] D. Leinbaugh and M. Yamini, "Guaranteed Response Times in a Distributed Hard-Real-Time Environment," IEEE Transactions on Software Engineering, SE-12(12):1139-1144, December 1986.

[23] K. Lin, S. Natarajan, "Expressing and Maintaining Timing Constraints in FLEX," Proceedings of the IEEE 1988 Real-Time Systems Symposium, December 1988.

[24] A. Mok, P. Amerasinghe, M. Chen, K. Tantisirivat, "Evaluating Tight Execution Time Bounds of Programs by Annotations," IEEE Workshop on Real-Time Operating Systems and Software, Pittsburgh, PA, pp. 74-80, May 1989.

[25] D. Niehaus, "Program Representation and Translation for Predictable Real-Time Systems," IEEE Real-Time Systems Symposium, pp. 53-63, San Antonio, Texas, December 1991.

[26] K. Nilsen, Issues in the Design and Implementation of Real-Time Java, Iowa State University, Ames, Iowa, 1995.

[27] V. Nirkhe, W. Pugh, "A Partial Evaluator for the Maruti Hard Real-Time System," Real-Time Systems, Vol. 5, No. 1, pp. 13-30, March 1993.

[28] C. Park, "Predicting Program Execution Times by Analyzing Static and Dynamic Program Paths," Real-Time Systems, Vol. 5, No. 1, pp. 31-62, March 1993.

[29] C. Park, A. Shaw, "Experiments with a Program Timing Tool Based on a Source-Level Timing Schema," IEEE Real-Time Systems Symposium, Orlando, FL, December 1990.

[30] P. Puschner, C. Koza, "Calculating the Maximum Execution Time of Real-Time Programs," International Journal of Time-Critical Computing Systems, Volume 1, Number 2, pp. 159-176, September 1989.

[31] Rate Monotonic Analysis for Real-Time Systems Project, Handbook of Real-Time Systems Analysis, Software Engineering Institute, Carnegie-Mellon University, May 1992.

[32] K. Schleisiek-Kern, Private Communication, DELTA t, Hamburg, 1990.

[33] G. Schrott, Ein Zuteilungsmodell fuer Multiprozessor-Echtzeitsysteme, PhD Thesis, Technical University, Munich, 1986.

[34] L. Sha and J. Goodenough, "Real-Time Scheduling Theory and Ada," Computer 23, 4, IEEE (April 1990), 53-62.

[35] M. Shaw, "A Formal System for Specifying, Verifying Program Performance," Carnegie-Mellon University, Computer Science Department, Technical Report CMU-CS-79-129, June 1979.

[36] A. Shaw, "Reasoning About Time in Higher-Level Language Software," IEEE Transactions on Software Engineering, SE-15, No. 7, pp. 875-889, July 1989.

[37] A. Shaw, "Deterministic Timing Schemata for Parallel Programs," University of Washington, Department of Computer Science and Engineering, Technical Report 89-05-06, May 1990.

[38] A. Stoyenko, Turing goes Real-Time ..., Internal Programming Languages Report, Department of Computer Science, University of Toronto, May 1984.

[44] A. Stoyenko, T. Marlowe, "Schedulability, Program Transformations and Real-Time Programming," IEEE/IFAC Real-Time Operating Systems Workshop, May 1991, Atlanta, Georgia.

[45] A. Stoyenko and T. Marlowe, "Polynomial-Time Transformations and Schedulability Analysis of Parallel Real-Time Programs with Restricted Resource Contention," Journal of Real-Time Systems, Vol. 4, No. 4, pp. 307-329, November 1992.

[46] A. Stoyenko, T. Marlowe, W. Halang and M. Younis, "Enabling Efficient Schedulability Analysis through Conditional Linking and Program Transformations," Control Engineering Practice, Vol. 1, No. 1, pp. 85-105, January 1993.

[47] A. Stoyenko, T. Marlowe and M. Younis, "A Language for Complex Real-Time Systems," The Computer Journal, Vol. 38, No. 4, pp. 319-338, November 1995.

[48] T. Tempelmeier, "A Supplementary Processor for Operating System Functions," 1979 IFAC/IFIP Workshop on Real-Time Programming, Smolenice, 18-20 June 1979.

[49] T. Tempelmeier, "Operating System Processors in Real-Time Systems - Performance Analysis and Measurement," Computer Performance, Vol. 5, No. 2, 121-127, June 1984.

[50] United States Department of Defense, Ada Joint Program Office, Reference Manual for the Ada Programming Language, ANSI/MIL-STD-1815A-1983 (February 1983).

[51] M. Younis, T. Marlowe and A. Stoyenko, "Compiler Transformations for Speculative Execution in a Real-Time System," Proceedings of the 15th Real-Time Systems Symposium, San Juan, Puerto Rico, December 1994.

[52] M. Younis, T. Marlowe, G. Tsai, A. Stoyenko, "Applying Compiler Optimization in Distributed Real-Time Systems," Technical Report CIS-95-15, Department of Computer and Information Science, New Jersey Institute of Technology, 1995.

[53] K. Zuse, Foreword to Wolfgang A. Halang, Alexander D. Stoyenko, Constructing Predictable Real-Time Systems, Kluwer Academic Publishers, Dordrecht-Hingham, 1991.