

Real-Time Operating Systems (by Damir Isovic)

Summary: The previous chapter provided a general introduction to real-time systems. This chapter discusses the operating-system-level support essential for the realization of real-time applications. There exists a multitude of real-time kernels¹, and they provide varying levels of support with regard to the realization of real-time systems. We provide an overview of commercially available RTOSs as well as dwell upon some of the academic research initiatives. The goal of this chapter is to provide the student with awareness of the characteristics of RTOSs and the available alternatives.


Learning objectives of this chapter

After reading this chapter you should be able to:

• Understand basic concepts in operating systems, such as communication, synchronization, interrupts, I/O, memory management, time management etc., with special emphasis on their real-time implications.
• Obtain a thorough knowledge regarding classes of operating systems and the features commonly supported.
• Get an overall perspective of the various commercial operating systems and academic research kernels, and how they realize/implement real-time properties.
• Understand the important issues to consider when choosing a real-time operating system for your development project (especially the role of application characteristics in this selection).


Concurrent task execution

In Chapter 1, we said that one of the main characteristics of real-time systems is multitasking, where several tasks compete for execution on the same CPU. Note that this is simulated parallelism – the tasks are not really executed in parallel, but the operating system creates an impression of parallelism by switching the execution of tasks often and fast. This is different from true parallelism, where several processing units are used simultaneously, e.g., as in multiprocessor systems. Simulated parallelism does not come without problems; we will look into some of them. Let us consider again the example with the electrical motor from Chapter 1, see Figure 1. The application consists of an electrical motor with a sensor and an actuator, user input via a throttle, and

¹ We will use the terms Operating System (OS) and kernel interchangeably, even though an OS typically consists of its central part – the kernel – and additional services. Due to the simplicity of many Real-Time OSs, they do not provide any additional services, such as a file system.

a computer system (controller). The question now is how we should design the controller software. The first question we can ask ourselves is what the controller software should do. The answer is that it calculates the control values to be sent to the motor based on sensor inputs, as described in Figure 1. Hence, the software needs to periodically read the values from the sensor and the user, compute the new control values, and actuate the motor.

Figure 1: Electric engine software interaction. (The controller program reads the sensor and the joystick, and writes to the actuator.)

So, how should we structure the software? Let’s try first with a simple, naive approach, which is to put all functionality in a single loop that repeats itself periodically, as described below.

void main (void)
{
    /* declare all variables */
    ...

    /* repeat continuously */
    while (1) {
        sensor_val = read_sensor();                /* read sensor */
        user_val = read_user();                    /* read user input */
        control(sensor_val, user_val, &signal);    /* compute new value */
        write_actuator(signal);                    /* actuate motor */
    }
}

Can you see any problems with this solution? One big problem is blocking in execution. For example, if the function read_sensor() is blocking, i.e., it does not return any result until a new sensor value has been generated, then the whole system will be blocked – the system is busy waiting. It gets worse if the function read_user() is blocking, since it waits for even rarer events generated by the user. We see that the functions in the example above are blocking each other, despite the fact that they are independent of each other. The lack of user input should not affect the reading of the engine sensor, and vice versa.

We can avoid the blocking problem above by checking in the main loop, via the status registers of the input ports, whether the values are ready to be read. For simplicity we can assume we have input ports that automatically reset their status bits when the corresponding data registers are read – a rather common case in practice.

...
while (1) {
    if (SENSOR_VALUE_READY) {
        sensor_val = read_sensor();
        control(sensor_val, user_val, &signal);
        write_actuator(signal);
    }
    if (USER_VALUE_READY) {
        user_val = read_user();
    }
}


This solution is better than the first one, since no blocking will occur. However, it is not resource-efficient, because the while-loop will run all the time, i.e., it will consume all CPU time regardless of how frequently the sensor values are generated – if there is a sensor value available, the corresponding action will be performed; if there is no value, the loop will proceed right away. This wastes a lot of CPU time; we should be able to execute other tasks in the system when there are no sensor values ready at the input ports. We can avoid the unnecessary execution by first checking whether any interrupts have been generated:

...
while (1) {
    wait_for_system_IRQ();    /* stop execution until next interrupt */
    if (SENSOR_VALUE_READY) {
        ...
    }
    if (USER_VALUE_READY) {
        ...
    }
}


This solution is called cyclic executive. The main idea is to put all functions in a sequence that executes in a joint loop with a certain periodicity. If we do not want all functions to run at each loop iteration, this can be controlled by loop counters. Cyclic executive is simple and deterministic; that is why it is still a commonly used solution in simple embedded systems in the industry. On the other hand, one disadvantage is that the execution schedule is "handmade", i.e., it must be redone for each single change in the system. However, the biggest disadvantage of cyclic executive is that it does not consider the execution times of the independent computations in the loop. For example, what happens if the sensor values are much more frequent than the user values, while the processing time of the user values is much larger? In this case, several sensor values will be generated while the computer responds to the user input – sensor values will be lost! This situation is depicted in Figure 2.

Figure 2: Lost sensor values. (While a user value is being processed, several sensor values arrive and are lost.)

To prevent losing any sensor values, we need to change the program so that the processing of user input is stopped whenever a new sensor value is generated, i.e., we need to preempt the execution of the user task each time a new sensor value has been generated, and then re-invoke the execution of the user task, as shown in Figure 3. In other words, we need to do manual interleaving of execution.

Figure 3: Manual interleaving of program execution. (The execution of the user task is preempted to run the sensor task, and then resumed.)

It is not easy to do code interleaving by hand, i.e., writing in the code where the preemption should occur – it usually results in many inserted if-cases in the program that check whether it is time to preempt the current execution and let somebody else execute. Instead, we would like to split the application into independent tasks, implement them as separate threads², assign appropriate timing constraints to them (e.g., deadlines and periods), and let the operating system (OS) take care of the interleaving.

In our electrical motor example, we would create two tasks – one that takes care of the sensor values and calculates new control values, and one that checks the user input – and implement them as real-time threads as described below:

void Sensor_Task()
{
    ...
    while (1) {
        sensor_val = read_sensor();
        control(sensor_val, user_val, &signal);
        write_actuator(signal);
        sleep(...);
    }
}

void User_Task()
{
    ...
    while (1) {
        user_val = read_user();
        sleep(...);
    }
}

When activated by the operating system, both tasks will perform their actions and, when done, they will call a sleep function that waits until it is time to repeat the computation. By calling sleep, the execution of the task is suspended for a specified time interval, which means that other tasks in the system can use the CPU. If a sensor value is generated while the user input is being processed (User_Task executes), the operating system will preempt User_Task and start executing Sensor_Task. When Sensor_Task is done, the OS will continue the execution of User_Task at the exact point where it was interrupted.

Hence, the interleaving is automatic, which means that the application programmer does not have to insert any special code to make task switching happen. On the other hand, the programmer must be prepared that switching might occur at any time, or take specific action to save the local context when switching occurs.

So, as application programmers, we need to structure our application as separate tasks and let the operating system take care of all multitasking issues, such as interleaving, resource allocation, scheduling etc. We will now start looking at a special type of operating system that is suitable for handling real-time tasks: real-time operating systems.

² We said in Chapter 1 that a thread is an implementation of a task, and that the terms are usually used as synonyms. We will use only the term task in the future to avoid confusion.

2.3 What is a Real-Time Operating System?

A real-time operating system (RTOS) is an operating system capable of guaranteeing certain functionality within specified time boundaries. We can say that an RTOS is a platform suitable for the development of real-time applications.

Figure 4 illustrates a simplified model of an RTOS used to provide services to a real-time application. At the bottom of the figure there is the physical hardware, i.e., the CPU itself with I/O devices, memory, registers, analog/digital converters, communication circuits, etc. The Hardware Adaptation Layer (HAL) contains the hardware-dependent code needed to communicate with the underlying hardware, e.g., device drivers, register handling code, interrupt handling code etc. The RTOS itself uses the functionality provided by the HAL. The RTOS can also communicate directly with the hardware, but it is better to use a HAL, since it will make the application easier to port to different platforms – when moving applications to other platforms we only need to change the HAL, not the application or RTOS code. Finally, the real-time application uses the services provided by the RTOS. Those services are usually called system calls.

Figure 4: A Real-Time Operating System in its environment. (From top to bottom: Application software, RTOS, Hardware Adaptation Layer, Hardware.)

2.4 RTOS characteristics

It is certainly possible to implement real-time applications without using a real-time operating system, but an RTOS makes the work much easier. Special functions available in an RTOS simplify the development of software for real-time systems and make it more efficient, mainly due to the following properties:

Task management – By using an RTOS, the development of real-time application software becomes easier and more efficient. We could see in the previous examples that if we need to develop an application consisting of several independent computations, it is a good idea to split the application into independent tasks and let the operating system manage their execution. Services in this category include the ability to launch tasks and assign priorities to them. The most important service in this category is the scheduling of tasks, which will make the tasks execute in a timely and responsive fashion. We will talk about different real-time scheduling policies in Chapter 3.

Resource management – an RTOS provides a uniform framework for organizing and accessing the hardware devices that are typical of an embedded real-time system. This includes services for the management of I/O devices, memory, disks, etc.

Communication and synchronization – These services make it possible for tasks to pass data from one to another without danger of that data being damaged. They also make it possible for tasks to coordinate, so that they can productively cooperate. Without the help of these RTOS services, tasks might well communicate corrupted data or otherwise interfere with each other.

Time services – Obviously, good time services are essential to real-time applications. Since many embedded systems have stringent timing requirements, most RTOS kernels also provide some basic timer services, such as task delays and time-outs.

Homogeneous programming model – an RTOS provides a number of well-defined system calls, which makes it easier to understand and maintain the application code. By using an RTOS, we use existing, reliable, proven building blocks, which reduces the development effort and risk and substantially increases the quality of the product. Besides, a homogeneous programming model usually implies the usage of a streamlined set of tools and methods to get a quality product into production as quickly as possible.

Portability – an RTOS simplifies porting between different platforms. Since most of the available RTOSs can be adjusted by the manufacturer to support different platforms, using an RTOS makes it simpler for the customer to change the hardware platform. Furthermore, standards cover all the possible interactions and interchanges between subsystems.

Hence, we can say that using an RTOS makes it much easier to implement and maintain real-time applications compared with doing everything from scratch. In general, an RTOS should be used whenever possible when developing real-time systems.

2.5 RTOS vs GPOS

What is the difference between an RTOS and a general-purpose operating system, such as Windows or Linux? Many non-real-time operating systems also provide kernel services similar to those of an RTOS, so why use an RTOS? The key difference between general-purpose operating systems (GPOS) and real-time operating systems is the need for deterministic timing behavior in real-time operating systems. Deterministic means that the provided OS services consume only known and expected amounts of time. General-purpose operating systems are often quite non-deterministic: their services can inject random delays into application software and thus cause slow responsiveness of an application at unexpected times.

Hence, the fundamental difference between an RTOS and a GPOS is the view on results, i.e., the temporal aspect is very important in an RTOS, which means, among other things, that:

• Service calls must be predictable, with a known upper bound on execution time.
• Task execution switching has to be done by some algorithm that can be analyzed for its timing.
• The maximum time that interrupts can be disabled must be known.
• The delay spent waiting for shared resources must be possible to determine.

Another thing that differs is the clock resolution, which is higher in an RTOS than in a GPOS. Each operating system has a system clock used for the scheduling of activities. To generate a time reference, a timer circuit is programmed to interrupt the processor at a fixed rate. The interval of time with which the timer is programmed to generate interrupts defines the unit of time in the system (the time resolution). The unit of time in the system is called a system clock tick, see Figure 5. The internal system time is represented by an integer variable, which is reset at system initialization and is incremented at each timer interrupt.

Figure 5: System clock. (The clock resolution is the interval between consecutive clock ticks 0, 1, 2, 3, 4, …)

The value to be assigned to the tick depends on the specific application. In general, small values of the tick improve system responsiveness and allow the handling of periodic activities with higher activation rates. On the other hand, a very small tick causes a large run-time overhead due to the timer handling routine: the smaller the tick value, the more CPU time is needed for the time administration. Typical values used for the time resolution in an RTOS are on a millisecond level or less, while in a GPOS they are on the level of tens of milliseconds, i.e., the clock resolution is an order of magnitude higher in an RTOS.

2.6 Types of RTOSs

We said before that one classification of real-time systems is into event-triggered and time-triggered, based on whether the system activities are carried out as they come, or at predefined points in time. Real-time operating systems are also divided into these two types.

In an event-triggered RTOS, if there are several tasks competing to execute on a single CPU, each task is assigned a priority relative to the other tasks in the system. High priority values represent the need for quicker responsiveness, i.e., the task with the highest priority will be executed first. Priorities can be assigned before run-time of the system (static priority assignment), or at run-time (dynamic priority assignment). Synchronization between tasks is usually done in an asynchronous way, e.g., by sending messages during execution.

Although we use priorities to determine the order of task execution, it is not always guaranteed that this order will be preserved at run-time. That is because of shared resources, such as shared variables or shared I/O devices. If a low priority task is currently using a resource that is requested by a high priority task, the high priority task will need to wait until the resource becomes free. However, in an RTOS the waiting time can be calculated and guaranteed (which is not the case in a general-purpose operating system).

In a time-triggered RTOS, tasks are executed according to a schedule determined before execution. For example, the schedule could say "at time 5 run task A until time 8", "at time 8 run task B until 12", etc. The schedule is repeated over and over. You can compare this type of RTOS with a bus schedule: a bus company makes a schedule for a bus that is valid during some time period, and then, every day, the bus drives according to the schedule, which makes it possible for people to know when the bus will arrive at a certain bus stop. This is exactly what happens in time-triggered real-time systems: we use some scheduling algorithm to create a schedule before putting the system into use, and then, at run-time, we just follow the schedule. Since all decisions about task execution, synchronization and communication are made before run-time, the run-time mechanism is quite simple – it just reads the schedule and executes the tasks according to it. Time acts as a means for synchronization.

There are also real-time operating systems that support both the event-triggered and time-triggered paradigms. A difficulty with those hybrid systems is the communication delays between the two parts, as well as the CPU sharing between event-triggered and time-triggered tasks.

Most commercial real-time operating systems are priority-driven; that is why we will put the emphasis on event-triggered RTOSs in this book. We will not focus on a specific RTOS, but discuss some general mechanisms common to most of them. A comparison between different commercial RTOSs will be presented at the end of this chapter.

2.7 Event-triggered Real-Time Operating Systems

Here we describe some of the most common services provided in an event-triggered RTOS.

Preemption and context switch

In most event-triggered real-time operating systems, a task with an assigned higher priority will be able to preempt the execution of a currently running lower priority task.

Figure 6 illustrates what happens when preemption occurs. We see in the figure that the RTOS will stop the execution of a task if there is a higher-priority task that wants to execute. When task τ2 becomes ready to execute at time t2, it will preempt the lower-priority task τ1, and when the highest-priority task τ3 gets ready, it preempts τ2 (and hence, indirectly, even τ1).

Figure 6: Preemption between tasks. (The low-priority task τ1 starts at t1; τ2 preempts τ1 at t2; the high-priority task τ3 then preempts τ2; the preempted tasks later resume, τ1 last, at t5.)

A general-purpose operating system might do task switching only at timer tick times, which could be tens of milliseconds apart. Such a delay would be unacceptable in most real-time systems. For this reason, most real-time operating systems do not rely on system clock scheduling alone. Rather, it is used in combination with other events in the system, e.g., a new task is released, a task gets blocked, an external interrupt occurs etc.

Each time the priority-based preemptive RTOS is alerted by an external-world trigger (such as a switch closing) or a software trigger (such as a message arrival), it must determine whether the currently running task should continue to run. If not, the following steps are made:

1. Determine which task should run next.
2. Save the environment of the task that was stopped (so it can continue later).
3. Set up the running environment of the task that will run next.
4. Allow this task to run.

These steps are together called task switching (or a context switch). The time it takes to do task switching is of interest when evaluating an operating system.

Task structure

A real-time task in an event-triggered RTOS consists of:

• Task Control Block (TCB) – a data structure that contains the task ID, the task state, the start address of the task code, and some registers, such as the program counter and the status register.
• Program code – the binary representation of the code to be executed by the task, which was originally implemented in some programming language, e.g., the C language.

• Data area – the task's stack and heap.

The TCB has pointers to the task code and the data area, see Figure 7.

Figure 7: A structure of a task. (The TCB – task ID, task state, program counter, status register – points to the program code and the data area.)

When a real-time kernel creates a task, it allocates memory space to the task and brings the code to be executed by the task into memory. In addition, it instantiates the Task Control Block and uses this structure to keep all the information it will need to manage and schedule the task. When we say that the RTOS inserts a task into a queue (e.g., the ready queue), we mean that it inserts a pointer to the TCB of the task into the queue. When a task is executing, its context changes continuously. When the task stops executing, the kernel keeps its context at that time in the task's TCB. The kernel terminates a task by deleting its TCB and deallocating its memory space.

The separation between task code and task data is done because we want to be able to store the code and the data in different types of memories. In embedded systems the task code is usually stored in an EPROM³, not allowing the code to use read-and-write memory. Moreover, in such systems the resources are usually limited, especially in system-on-chip computers, where the objective is to avoid the usage of external memory as much as possible.

Another reason for separating the program code from the data area is to be able to reuse the same code for different tasks. For example, assume a PID⁴ controller that has been used and tested for a while. We want to add an additional PID controller, with different control parameters and periodicity, that will perform the same action but with different input values. In this case we can use the same program code for both PID controllers, as illustrated in Figure 8.

³ EPROM: Erasable Programmable Read-Only Memory is a type of memory that can be erased and re-written, usually by using ultra-violet light.
⁴ PID: A proportional–integral–derivative controller is a generic control-loop feedback mechanism widely used in industrial control systems. It attempts to correct the error between a measured process variable and a desired setpoint by calculating and then outputting a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal.

Figure 8: Shared program code. (Two TCBs – PID 1 and PID 2 – each with its own parameters and data area, point to the same shared program code.)

Reentrant code

To be able to reuse the same code in different tasks, as in the PID example above, the code must be reentrant, i.e., the code must be preemptable in the middle of its execution without any side effects. A reentrant piece of code can be simultaneously executed by two or more tasks.

Example: Is the function swap, which exchanges the values of two variables, reentrant?

int temp;

void swap(int *x, int *y)
{
    temp = *x;
    *x = *y;
    *y = temp;
}

The answer is no – the function swap is not reentrant. The reason for this is the usage of the global variable temp. Assume a low priority task L and a high priority task H that both use the function swap, and that L starts to execute. Assume H does not want to execute for the moment. After L has executed the code line temp=*x, task H becomes ready and preempts task L. During its execution, H will change the value of temp, so when L resumes its execution again, the value of *y will be wrong.

The whole scenario is illustrated below:

task_L(){
    int x=1, y=2;
    swap(&x, &y);
      temp = *x;                 /* temp=1 */

  (H preempts L) ------------->  task_H(){
                                     int z=3, t=4;
                                     swap(&z, &t);
                                       temp = *z;   /* temp=3 */
                                       *z = *t;     /* z=4 */
                                       *t = temp;   /* t=3 */
  (L continues)  <-------------  }

      *x = *y;                   /* x=2 */
      *y = temp;                 /* y=3 (WRONG! it should be 1) */
}

When swap() is interrupted, temp contains the value 1. The high priority task H sets temp to 3 and swaps the contents of its local variables correctly (i.e., z=4 and t=3). After finishing its execution, task H gives the control back to the low priority task L, which is then resumed. Note that at this point temp is still set to 3 (since it is a global variable). Hence, when L resumes execution, it sets its local variable y to 3 instead of 1, which is obviously wrong.

There are several ways to avoid this problem and make the code reentrant: declare temp as a local variable, disable interrupts until task L is done (so that task H cannot preempt it), or protect the global variable temp from simultaneous access, e.g., by using semaphores (we will talk about semaphores later in this chapter). In general, reentrant code cannot use global variables without protection; otherwise, different tasks may update the same memory location in a non-deterministic order.

Task states

A task changes its state during its lifetime. For example, a task that executes can become blocked by another task that uses a shared resource. Another example is a task that is done with its execution in the current period; it calls a sleep function that will put it in the waiting state. In general, we can identify the following states for a task:

Dormant – This state means that the task is not yet consuming any resources in the system. The task is registered in the system, but it is either not activated yet or has terminated.
Ready – By entering the ready state, a task expresses its wish to gain access to the processor.
Executing – A task enters this state as it starts to run its code on the processor. Only one task can be executing at a time (on a single-core processor).
Waiting – A task enters this state when it waits for an event, e.g., a timeout expiration or a synchronization signal from another task.
Blocked – A task is blocked when it is released but cannot continue its execution for some reason; for example, it may be blocked waiting for a shared resource to become free.

Not all states have to be supported by an RTOS, but any kernel that supports the execution of concurrent tasks on a single processor has at least the states executing, ready and waiting. This is summarized in Figure 9.

Figure 9: Task states and state transitions. (The states are Dormant, Ready, Executing, Waiting and Blocked.)

The next obvious question we can ask ourselves is which state transitions are valid:

Dormant → Ready: This transition occurs when a task is activated, i.e., when it wants to execute.

Ready → Executing: The task that has the highest priority among all ready tasks at the moment will start to execute. A ready task cannot gain control of the CPU until all higher priority tasks in the ready or executing state either complete or become dormant, waiting or blocked.

Executing → Ready: If another task, with higher priority than the currently executing task, has become ready, it will preempt the current task and become executing itself. The preempted task will then go to the ready state (where it has to compete again with all other ready tasks).

Executing → Waiting: An executing task becomes waiting by, e.g., invoking a system call such as sleep at the end of its execution in the current period. The before-mentioned sleep function will put the task in the waiting state; when the waiting time has elapsed, the task will become ready again.

Executing → Blocked: An executing task becomes blocked when it comes to a point in its execution where it cannot proceed, since it cannot get access to a necessary resource that is locked by some other task. It is important to note the difference between blocked and waiting – a task is forced to enter the blocked state, while it enters the waiting state voluntarily.

Executing → Dormant: When a task has terminated or completed its execution, it becomes dormant. Tasks in this state may be destroyed.

Waiting → Ready: When a task has spent the desired time period in the waiting state, it goes back to ready. Why can't it enter the executing state directly? The answer is: there might be other tasks in the system that are also ready, and some of them might have higher priority.

Blocked → Ready: When the resource that caused a task to become blocked has been freed, the task must go via the ready queue, for the same reason as discussed above.

Thus, all transitions to the executing state go through the ready state: a task is placed in the proper place in the ready queue as soon as it becomes ready, and the ready tasks are sorted, based on their priorities, to execute. The kernel invokes the scheduler to update the ready queue whenever it wakes up or releases a task, finds a task unblocked, creates a new task, and so on.

When does a state transition occur? The answer is:

• At system clock timer interrupts. At each clock tick, the RTOS increases the system time and then checks if there are any task transitions to be made, i.e., all ready tasks are transferred to the ready queue, and if there is a ready task with a higher priority than the currently executing one, a task switch occurs.
• At external interrupts, i.e., when an interrupt routine invokes a system call that causes a task switch.
• When a task invokes a system call, such as e.g. sleep.

Time handling functions

We mentioned before that an RTOS has a system clock for the scheduling of activities. Let's have a look at some functions that operate on the system clock:

getTime() – get the system time
setTime(t) – set the system time
adjustTime(t) – adjust the system time

The first two services are pretty self-explanatory, but why do we need adjustTime(t)? Can't we just use setTime(t) if we want to change the system time? Yes, in many cases we can, but consider the following case. Assume that we have scheduled a number of tasks to run at different clock ticks that belong to a time interval [t1, t2], and that at time t1 we call setTime(t2). If we jump from t1 to t2, all scheduled tasks in the interval will be released at once. This will create a temporary overload in the system that can crash it. If we, however, use the adjustTime function, it will adjust the time in discrete steps, i.e., it will either speed up or slow down the system clock during a certain period until we reach the desired time. This will avoid multiple simultaneous task releases, since the tasks keep different release times, as they would have if we had not set a new time.

.. The timestamp value will be correct.timeStamp = getTime(). we record at which point in time the reading occurred. Task execution a) time preemption High-priority task b) read(sensor) getTime()   Figure 10: Wrong timestamp due to preemption.. the recorded time stamp can be much later than the actual reading took place. this is a quite drastic solution because disabling interrupts means also not being able to respond to external events within that time.. as depicted in Figure 10-b? In this case. a. a.e.Another problem that you should be aware of when using time functions is a risk for wrong timestamps.. i. . Consider a task where we read some sensor data and timestamp it. i. since we will read the data at a certain point in time that differ from the one recorded.e. but before timestamping it. because of the preemption which delayed reading of current system time.. Assume that the task code looks like this: void Task_T() { struct a. which may be a ..  read(sensor) getTime() time One possible solution to this problem could be to disable interrupts while reading the sensor data and timestamping it.value = read(sensor). the timestamp will not reflect the actual reading time. However. }   Figure 10-a illustrates the case when the task executes without any interruption. But what if the task gets preempted after reading the sensor value.

problem in case of urgent events. A better solution would be to protect the code with some resource access mechanism, such as semaphores, which will be described next.

Semaphores

A critical region is a sequence of statements in the code that must appear to be executed indivisibly (or atomically). A semaphore is a data structure used for the protection of critical regions. In more general terms, we can say that semaphores are used for synchronization between tasks by providing mutual exclusion when several tasks access the same resources. Mutual exclusion means that only one task is using the resource at a time. In the timestamp example above, the critical region contains both reading the sensor value and putting a timestamp on it. Here is a code example of how to use semaphores:

void Task_T() {
   ...
   if (lockSemaphore(S)) {    /* try to get semaphore S */
      /* critical region entered */
      a.value = read(sensor);
      a.timeStamp = getTime();
      unlockSemaphore(S);
      /* critical region exited */
   }
   else
      /* failed to lock semaphore S */
   ...
}

The typical semaphore mechanism used in traditional operating systems is, however, not suited for implementing real-time applications, because it is subject to priority inversion, which occurs when a high-priority task is blocked by a low-priority task for an unbounded interval of time. Priority inversion must absolutely be avoided in real-time systems, since it introduces nondeterministic delays in the execution of critical tasks. It can be avoided by adopting particular resource access protocols that must be used every time a task wants to enter a critical region. We will talk about those real-time resource access protocols later, in the scheduling chapter.
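To make the mutual-exclusion idea concrete, here is a minimal sketch using POSIX threads rather than an RTOS API (the names `task`, `run_demo` and the simulated clock are ours, purely for illustration): two tasks update a shared "sensor record" under a mutex, so the value and its timestamp always belong together.

```c
#include <pthread.h>

/* Sketch with POSIX threads (not an RTOS API): a mutex protects the
 * critical region so value and timeStamp are always taken together. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static struct { long value; long timeStamp; } a;
static long clock_ticks = 0;            /* stand-in for the system clock */

static void *task(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* critical region entered */
        clock_ticks++;
        a.value = clock_ticks;          /* "read(sensor)" */
        a.timeStamp = clock_ticks;      /* "getTime()" at the same instant */
        pthread_mutex_unlock(&lock);    /* critical region exited */
    }
    return 0;
}

/* Returns 1 if the record is still consistent after two concurrent tasks. */
int run_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, 0, task, 0);
    pthread_create(&t2, 0, task, 0);
    pthread_join(t1, 0);
    pthread_join(t2, 0);
    return a.value == a.timeStamp && clock_ticks == 200000;
}
```

Note that a plain POSIX mutex, like the generic semaphore above, does not by itself prevent priority inversion; that requires a protocol such as priority inheritance.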

This way, the reading of the sensor value and the timestamping cannot be interleaved with other tasks that use the same resource.

Interrupt handling

If not handled properly, interrupts generated by external devices can cause a serious problem for the predictability of a real-time system, since they can introduce unbounded delays in task executions. In real-time systems, the interrupt handling mechanism must allow the most critical tasks to execute without interference. The objective of the interrupt handling mechanism of an RTOS is to provide service to the interrupts generated by attached devices, such as the keyboard, serial ports, sensor interfaces, etc. This service consists of the execution of a dedicated routine (device driver) that will transfer data from the device to the main memory or vice versa. In classical operating systems, device drivers have direct access to the registers of the interfacing boards, and application tasks can always be preempted by drivers, at any time. In real-time systems, this approach may cause some hard task deadlines to be missed. Hence, interrupt handling should be integrated with the scheduling mechanism, rather than allowing random interrupts to preempt high-priority tasks. This can be done by using one of the following techniques:

Disable all external interrupts – This is the most radical approach, where all peripheral devices must be handled by the application tasks, which have direct access to the registers of the interfacing boards. Since no interrupt is generated, data transfer takes place through polling, i.e., periodically checking if any new event has occurred. The main disadvantage of this approach is low processor efficiency on I/O operations, due to the polling.

Manage external devices by dedicated kernel routines – All external interrupts are disabled, but the devices are handled by dedicated kernel routines rather than by application tasks. The advantage of this approach with respect to the previous one is that all hardware details of the peripheral devices can be encapsulated into kernel procedures and do not need to be known to the application tasks. A major problem of this approach is that the kernel has to be modified when some device is replaced or added. Besides, since the device handling routines are part of the kernel, an application task cannot be given a higher priority than the device handling.

Allow all external interrupts, but reduce the drivers to the least possible size – According to this approach, the only purpose of each driver is to activate a proper task that will take care of the device management, see Figure 11. Interrupt handling is integrated with the scheduling mechanism, so that the task that handles an interrupt event can be scheduled as any other task in the system. In real-time systems, we control the execution order by assigning priorities to tasks; hence, an application task can have a higher priority than a device handling task. This approach also has high CPU efficiency on I/O operations, since the interrupts are handled when they occur (no polling).

Figure 11: A technique for handling interrupts. An external interrupt passes from the hardware through the HAL to a small interrupt routine, which activates the application task that handles the interrupt.
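The third technique can be sketched with POSIX primitives (a simulation, not a real RTOS: the "interrupt routine" is just a thread here, and the names `interrupt_routine`, `handler_task` and `run_irq_demo` are ours). The interrupt routine does the least possible work, signaling a semaphore, while a normal task blocks on the semaphore and performs the actual device management:

```c
#include <pthread.h>
#include <semaphore.h>

/* Figure-11-style sketch: the ISR only notifies; a schedulable task
 * does the real work. */
static sem_t irq_sem;
static int handled = 0;

static void *interrupt_routine(void *arg) {
    for (int i = 0; i < 5; i++)
        sem_post(&irq_sem);             /* minimal work: just notify */
    return 0;
}

static void *handler_task(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&irq_sem);             /* runs like any other task */
        handled++;                      /* the actual device management */
    }
    return 0;
}

/* Returns the number of serviced "interrupts". */
int run_irq_demo(void) {
    sem_init(&irq_sem, 0, 0);
    pthread_t isr, task;
    pthread_create(&task, 0, handler_task, 0);
    pthread_create(&isr, 0, interrupt_routine, 0);
    pthread_join(isr, 0);
    pthread_join(task, 0);
    sem_destroy(&irq_sem);
    return handled;
}
```

Because the handler is an ordinary task, the scheduler can order it against application tasks by priority, which is exactly the point of this technique.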

In the worst case, disabling interrupts in a real-time system can lead to serious consequences, because the handling of an interrupt can be delayed for as long as interrupts remain disabled. Almost every system allows you to disable interrupts, usually in a variety of ways: most I/O chips allow a program to tell them not to interrupt, and microprocessors allow your program to tell them to ignore incoming signals on their interrupt request pins, either by writing a value to a special register in the processor or with a single assembly language instruction. If the RTOS kernel has disabled interrupts just before an interrupt occurs, the task that handles the interrupt will not be able to run until the kernel has enabled interrupts again. The longest time for which the RTOS can keep interrupts disabled, and hence delay the start of interrupt handling, is known as the interrupt latency. This is illustrated in Figure 12.

Figure 12: Interrupt latency. The event that causes the interrupt occurs while the RTOS kernel runs with interrupts disabled; only when interrupts are enabled again can the interrupt routine, and then the handling task, execute. The total interrupt handling time includes the RTOS interrupt latency.

Interrupt latency is one of the most important factors when choosing an RTOS for an application. In other words, we must know the longest time for which the RTOS can disable interrupts. If there is an interrupt in the system that must be served faster than the length of time interrupts are disabled in the RTOS, then we cannot use that RTOS: the interrupt latency caused by every device driver must be both small and bounded. Low interrupt latency is not only necessary for hard real-time systems. Even in soft real-time systems it is needed for reasonable overall performance (for example, the performance of multimedia applications), particularly when working with the processing of audio and video. In this type of system, the question of how fast the system responds to each interrupt is crucial.
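The feasibility argument above can be captured in a one-line check (our own helper function, not an RTOS service): an interrupt with a response-time requirement can only be served if the worst-case interrupt-disable time plus the handler's execution time fits within the requirement.

```c
/* Feasibility check for one interrupt source (illustrative helper):
 * max_disable_time - longest time the RTOS keeps interrupts disabled
 * handler_time     - worst-case execution time of the handler
 * deadline         - required response time of the interrupt
 * Returns 1 if the deadline can be met in the worst case. */
int interrupt_deadline_met(int max_disable_time, int handler_time, int deadline) {
    int worst_case_response = max_disable_time + handler_time;
    return worst_case_response <= deadline;
}
```

For example, a handler of 30 time units behind a kernel that disables interrupts for at most 50 units meets a deadline of 100, but not if the disable time grows to 80.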

Task execution mechanisms

Most real-time tasks are periodic, i.e., they perform the same computation again and again, with a specified period time interval between two consecutive invocations (e.g., reading a sensor value every 100 milliseconds). This is very similar to regular functions that are called repeatedly. An individual occurrence of a periodic task is called a task instance (also known as a job), see Figure 13. A task can be re-invoked infinitely, i.e., during the entire lifetime of the system, or it can be terminated after a finite number of instances.

Figure 13: Periodic task instances. Instances k, k+1, k+2, ... of task τ are invoked one period apart.

In an operating system that supports periodic tasks, we just need to assign periods to the tasks before runtime, and the kernel keeps track of the passage of time and releases (i.e., moves to the ready queue) each task again at the beginning of its next period. When an instance completes, the kernel itself re-initializes the task and puts it to sleep; it would clearly be inefficient if the task were created and destroyed repeatedly every period. Here is an example:

int period_time = 50;

void task_τ() {
   ...
   /* do task work */
   ...
   /* kernel takes over when the task is done */
}

Most commercial RTOSs, however, do not have an implicit mechanism for periodic tasks at the kernel level. Since many real-time tasks are of a periodic nature (e.g., sampling), there must exist some other mechanism available in those RTOSs to implement periodic tasks explicitly, at the user level.

We can implement a periodic task at the user level as a thread that alternately executes the code of the task and sleeps until the beginning of the next period. In other words, at the end of its execution the task suspends itself for some time (goes from the executing into the waiting state), allowing lower-priority tasks to use the processor. The task does its own re-initialization and keeps track of the time for its own next release, e.g.:

void task_τ1() {
   int period_time = 50;
   while(1) {                    /* do forever */
      ...
      /* do task work */
      ...
      /* wait some time and re-invoke */
      sleep(period_time);
   }
}

The infinite while-loop ensures that the task instances are invoked repeatedly, and the sleep-function makes sure that there is some time interval between consecutive invocations. Without sleep, the task would run all the time, consuming all CPU time, and no lower-priority tasks could execute (only higher-priority tasks could preempt it), which is not what we want. The sleep call puts the task into the waiting state for the number of clock ticks specified by period_time.

The sleep-function can be implemented in different ways, providing relative or absolute delays. A relative delay means that the next instance is released when a specified time, 50 in this example, has elapsed relative to the call time. The sleep-function in the code example above provides a relative delay. However, an alert mind will notice directly that the implementation above will not really achieve the desired period time, since it does not take into consideration the execution time of the task itself. If, for example, it takes 10 clock ticks to execute the task, then the next instance will be released at 10+50 ticks after the previous release. We can solve this by subtracting the execution time from the desired period time, e.g.:

void task_τ1() {
   int period_time = 50;
   while(1) {
      ...
      /* do task work for 10 clock ticks */
      ...
      sleep(period_time - 10);
   }
}

The solution above is not general, however. The period will be 50 only if the task is either the only one in the system, or if it has the highest priority. Otherwise, if there are other higher-priority tasks in the system, they might preempt the task just before it calls the sleep-function. Assume, for example, a higher-priority task τ2 with an execution time of 20 clock ticks, which preempts τ1 after 10 ticks of τ1's execution, as illustrated in Figure 14. The preemption will cause τ1 to invoke its next instance after 70 ticks, not the desired 50 ticks.

Figure 14: Relative delay. Task τ2 preempts τ1 after 10 ticks of execution; sleep is called with wait_time = 40 at time t+30, so the next instance of τ1 is released at t+70.

Hence, to get the correct period we also need to include the preemption time in the execution time of τ1, which can be done by using timestamps, as follows:

void task_τ1() {
   while(1) {
      start_time = getTime();
      ...
      /* do task work */
      ...
      stop_time = getTime();
      sleep(period_time - (stop_time - start_time));
   }
}

An absolute delay instead suspends the task's execution until the system clock has reached a specified time (counted from the start of the system), regardless of whether the task gets preempted or not. The system call used to provide an absolute delay is usually called sleepUntil, delayUntil or waitUntil.
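The drift of the naive relative delay, versus the stability of the absolute delay, can be seen in a small release-time computation (our own simulation; the function names are made up for illustration):

```c
/* Release times with a naive sleep(period) after `exec` ticks of work:
 * each round drifts by `exec` ticks. */
void naive_releases(int period, int exec, int n, int out[]) {
    int t = 0;
    for (int i = 0; i < n; i++) {
        out[i] = t;              /* instance i released at time t */
        t += exec + period;      /* work, then sleep a full period */
    }
}

/* Release times with an absolute delay: next_time advances by exactly
 * one period per instance, independent of execution or preemption. */
void absolute_releases(int period, int n, int out[]) {
    int next_time = 0;
    for (int i = 0; i < n; i++) {
        out[i] = next_time;               /* instance i released */
        next_time = next_time + period;   /* sleepUntil(next_time) */
    }
}
```

With period 50 and execution time 10, the naive version releases its fourth instance at tick 180 instead of 150, and the error keeps growing; the absolute version stays at exact multiples of 50.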

Here is an example of how we can implement the task above by using an absolute instead of a relative delay:

void task_τ1() {
   ...
   period_time = 50;
   next_time = getTime();
   while(1) {
      /* do task work */
      ...
      next_time = next_time + period_time;
      sleepUntil(next_time);
   }
}

If the task starts invoking its instances at time t (i.e., the initial next_time is equal to t), then all consecutive values of next_time will be t+50, t+100, t+150, etc., regardless of whether the task gets preempted or not.

Jitter

We have shown above how to implement periodic tasks with the help of relative and absolute delays. A correctly calculated period, however, does not necessarily mean that the distance between the executions of consecutive task invocations will be constant. There can be variations in the actual execution, caused by high-priority tasks. Those variations are called jitter. Consider the following example: assume two periodic tasks, τ1 and τ2, with execution times 2 and 1, and period times 4 and 10, respectively. Assume also that τ1 has higher priority than τ2. Figure 15 shows what happens if both tasks are released at the same time. Although we have defined the period of τ2 to be 10, the time between the executions of its instances will vary between 8 and 12 clock ticks, depending on whether preemption by τ1 occurs or not.

Figure 15: Jitter in periodic execution. The distance between consecutive executions of τ2 varies between 8 and 12 ticks.
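The 8- and 12-tick figures can be verified with a tiny tick-level simulation of the example (our own sketch, not RTOS code; the function name is made up): at every tick the higher-priority task τ1 runs if it has work left, otherwise τ2 runs.

```c
/* Simulates the Figure-15 example: τ1 (high priority) has period 4 and
 * execution time 2; τ2 (low priority) has period 10 and execution time 1.
 * Records the completion times of τ2's first n instances in out[]. */
void simulate_tau2_completions(int n, int out[]) {
    int rem1 = 0, rem2 = 0, done = 0;     /* remaining work of τ1, τ2 */
    for (int t = 0; done < n; t++) {
        if (t % 4 == 0)  rem1 = 2;        /* release of τ1 */
        if (t % 10 == 0) rem2 = 1;        /* release of τ2 */
        if (rem1 > 0)                     /* higher priority runs first */
            rem1--;
        else if (rem2 > 0) {
            rem2--;
            if (rem2 == 0)                /* τ2 finishes at end of tick */
                out[done++] = t + 1;
        }
    }
}
```

The first three completions of τ2 come out at ticks 3, 11 and 23, i.e., the gaps are 8 and 12, exactly the jitter described in the text.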

The objective is to minimize the jitter for each task (ideally, jitter = 0). The smaller the jitter, the better the periodicity of a task's execution. The only task for which we can guarantee jitter-free execution is the highest-priority one, and even then only under the condition that no external interrupts take place. In all other cases, tasks can get jitter. We will show later, in the scheduling chapter, how we can calculate the effect of jitter when predicting the system behavior.

Communication and synchronization mechanisms

Often, tasks execute asynchronously, at different speeds, but need to interact with each other, e.g., to communicate data to each other or to access shared resources. Most real-time operating systems offer a variety of mechanisms for handling task interactions. These mechanisms are necessary in a preemptive environment of many tasks, because without them tasks might communicate corrupted information or otherwise interfere with each other.

The simplest way for tasks to communicate is through shared memory, where each communicating task may update pieces of shared information/data, as illustrated in Figure 16. It provides a low-level, high-bandwidth and low-latency means of inter-task communication. It is commonly used for communication among tasks that run on one processor, as well as among tasks that run on tightly coupled multiprocessors.

Figure 16: Communication through shared variables. Task 1 writes the shared variable v, which Task 2 reads.

Communication through shared memory is an easy and efficient way of communication. One disadvantage of this approach is data overwriting: the old values are overwritten by the new ones, since no buffering is provided. Another difficulty is synchronizing the accesses to the shared memory. The application developer must make sure that the data access is atomic, but that is not always easy. For instance, we could see in the swap-example above that a preemption in the middle of the operation can cause wrong values to be assigned to the variables that are to be swapped.

Another example is two tasks sharing the same display, where one task measures the current air temperature and displays it, e.g., "10°C", while the other one displays the current time, e.g., "23:15". If we do not protect the access to the display device, one task may preempt the other in the middle of writing, which can result in strange display output. For example, the temperature task writes "10" and, before writing the rest of the display text, the second task preempts it and writes its own text "23:15", which results in the display output "1023:15". We can, for example, use semaphores to protect the access to the shared resource and solve the problem.

Atomic access means that tasks must not be interrupted while updating the shared memory space. This can be achieved by protecting the shared variable with a semaphore, as shown in the code example below:

void sender_task() {
   ...
   while(1) {
      ...
      lockSemaphore(S);
      /* enter critical region */
      v = getValue();
      /* exit critical region */
      unlockSemaphore(S);
      ...
   }
}

void receiver_task() {
   ...
   while(1) {
      ...
      lockSemaphore(S);
      /* enter critical region */
      local = v;
      /* exit critical region */
      unlockSemaphore(S);
      ...
   }
}

An alternative is to use a Wait- and Lock-Free Channel (WLFC), which is a method to accomplish non-blocking communication between tasks. Non-blocking means that if two or more tasks want to read from a wait- and lock-free channel, no reader is delayed by another task (compare this to the shared variables approach, where tasks get blocked if some other task is updating the variable). This is done by assigning one buffer to each reader and writer of the WLFC, see Figure 17. One extra buffer is added to ensure that there always exists one free buffer in the WLFC. The formula for calculating the number of needed buffers is:

nbuffers = nwriters + nreaders + 1

In Figure 17, both tasks get their own buffer slot. Task τ1 starts by writing some data to buffer slot 1. When task τ2 starts to read the data from slot 1, task τ1 continues writing to slot 3. This way, there is always one free buffer for writing.

Figure 17: Wait- and lock-free communication. Task 1 (producer/writer) and Task 2 (consumer/reader) each hold a buffer slot, while the third slot is the free slot.
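The dimensioning rule is simple enough to express directly (the function name is ours, just wrapping the formula from the text):

```c
/* Number of buffer slots a WLFC needs so that no writer ever has to
 * wait for a free slot: one slot per task plus one spare. */
int wlfc_buffers_needed(int nwriters, int nreaders) {
    return nwriters + nreaders + 1;
}
```

For the single-writer/single-reader channel of Figure 17 this gives 3 slots; a channel with 2 writers and 3 readers would need 6.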

Since writers do not share buffer slots, there is no need for an atomic write operation. A WLFC contains an array of buffers, pointers to the oldest and the newest values in the buffer, and a list of all tasks that may use the buffers. Every time a task becomes READY (due to a new period), it is assigned (by the kernel) a pointer to a buffer within the WLFC. If the task is a reader, it will get the most recently written buffer; if the task is a writer, it will get the pointer to the first free buffer, the one with the oldest value. Since the buffers are user-defined, it is the user who is responsible for filling the buffer with data. Here is an example of how wait- and lock-free communication can be used in task code:

void task_Producer() {
   ...
   while(1) {
      ...
      /* get the pointer to the buffer to write to */
      buff_ptr = writeWLFC(buf_ID);
      ...
   }
}

void task_Consumer() {
   ...
   while(1) {
      ...
      /* get the pointer to the buffer to read from */
      buff_ptr = readWLFC(buf_ID);
      ...
   }
}

Both the read- and write-functions return a pointer to the buffer to operate on. This makes WLFC good for communicating large amounts of data that change continuously. The disadvantage is that wait- and lock-free communication requires more memory than shared variables.

Message passing is the most popular technique for transferring data between tasks in a multitasking software environment. Most real-time operating systems use "indirect" message passing, where messages are not sent straight from task to task, but rather through message queues, see Figure 18. The idea is that one task sends messages into the queue, and, perhaps later on, another task fetches the messages from the queue. As soon as the receiver task has read a message, it is removed from the queue.

Figure 18: Communication through a message queue. Task 1 sends messages into the queue; Task 2 fetches them.

Before a task can send a message to another task, the message queue needs to be created. Any task can ask to create the queue; it does not have to be the message sender task or the message receiver task. But then, in order for them to communicate through the queue, both the message sender and the message receiver tasks need to be informed of

the identity of the queue, called a queue identifier. Here is an example code for inter-task communication via message queues:

void task_Sender() {
   ...
   while(1) {
      ...
      /* send message */
      if (send(MSGQ, msg))
         /* message sent */
      else
         /* something is wrong, e.g., queue full */
      ...
   }
}

void task_Receiver() {
   ...
   while(1) {
      ...
      /* receive message */
      receive(MSGQ, &msg);
      ...
   }
}

A message queue can be either global or local. Global means that all tasks in the system can read messages from the message queue, while a local queue is connected to a specific pair of tasks (sender and receiver). A message queue is usually implemented as a first-in-first-out (FIFO) queue. Some RTOSs use priority queues instead, which is often a better choice for real-time systems: the sending task can specify the priority of its message, and a high-priority message will be de-queued, and hence received, faster than lower-priority messages.

When creating message queues for inter-task communication, we need to allocate memory to store the messages in the queue. A common problem when building embedded systems is the lack of memory: embedded systems usually have very limited memory and CPU resources, which should be used in the most efficient way. So, when allocating memory for the message queues, we should be careful not to waste more memory than necessary. For example, there is no point in allocating memory for 50 messages if the queue will contain at most one message at a time.

Here is an example. Assume two tasks that communicate through a message queue. The sender task, τ1, has a period time of 500 and an execution time of 200. The receiver task, τ2, has a period time of 300 and an execution time of 100. The sender task sends three messages to the receiver task in each instance, and the receiver task reads two messages in each instance. Since the receiver task has higher priority, it will preempt the sender task whenever both are ready at the same time. How must the message queue be dimensioned? To be able to answer this question, we first need to look at how the sender and the receiver task interleave during their execution.

The execution trace is shown in Figure 19, in which both tasks are released simultaneously at time 0, the worst-case scenario we need to analyze. How long should we analyze the trace? The answer is: until the next point in time at which the tasks are again released simultaneously. This interval is known as the hyperperiod, and it is easily obtained by calculating the least common multiple (lcm) of the task periods; here it is lcm(300, 500) = 1500. After time 1500, the execution pattern of the tasks will be exactly the same as the one between 0 and 1500, hence we just need to consider the trace up to the lcm of the task periods.

Figure 19: Example communication via message queues. τ2 executes at 300, 600, 900 and 1200; τ1 executes at 100, 500 (resumed at 700) and 1000. Both tasks are ready at time 0, and ready again at time 1500.

We can use the following table to illustrate what happens during the execution:

Time   Task execution                                        Queue before     Queue after
0      τ2 starts (high priority). No messages to read yet.   –                –
100    τ1 starts and sends 3 messages.                       –                m1, m2, m3
300    Second instance of τ2 runs. It reads two messages.    m1, m2, m3       m3
500    Second instance of τ1 executes. It sends three
       additional messages before it is preempted.           m3               m3, m4, m5, m6
600    Third instance of τ2 preempts τ1. It reads
       two messages.                                         m3, m4, m5, m6   m5, m6
900    Fourth instance of τ2 runs. It reads two messages.    m5, m6           –
1000   Third instance of τ1 runs. It sends three
       new messages.                                         –                m7, m8, m9
1200   Fifth instance of τ2 runs. It reads two messages.     m7, m8, m9       m9
1500   Start of new hyperperiod.                             m9               m9
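The hyperperiod used above is just the least common multiple of the periods, which can be computed with two small helper functions (our own, standard number theory):

```c
/* Greatest common divisor (Euclid's algorithm). */
int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

/* Least common multiple, i.e., the hyperperiod of two task periods. */
int lcm(int a, int b) { return a / gcd(a, b) * b; }
```

For the example tasks, lcm(300, 500) = 1500, and for the earlier jitter example, lcm(4, 10) = 20.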

We see that the maximum number of messages contained in the queue at any given time is four, hence it is enough to dimension the queue to hold four messages.

Task synchronization

Tasks synchronize in order to ensure that their exchanges occur at the right times and under the right conditions. For example, a task may need the ability to say "stop", "go" or "wait a moment" to itself or to another task, in order to carry out the required activities. Synchronization between two tasks can be implemented with the following service calls:

• sendSignal(event) – signals the fact that an event has occurred. Its action is to place the event information in a channel or pool, which may in turn enable a waiting task to continue. Signals can be sent directly to a specific task, or they can be sent as a broadcast to all tasks in the system. If sent directly, we need to include the receiver task in the sendSignal(...) call.
• waitSignal(event) – causes the task to suspend its activity as soon as the wait operation is executed; it will remain suspended until notification of the event is received.

Another way of implementing synchronization is to use semaphores, which we have already discussed.

Memory management

Many general-purpose operating systems offer memory allocation services from what is called a heap. With the famous malloc and free services, known to C-language programmers, tasks can temporarily borrow some memory from the operating system's heap by calling malloc, and free it when done by calling free. Heaps, however, suffer from external memory fragmentation, which may cause the heap services to degrade. This fragmentation is caused by the fact that when a buffer is returned to the heap, it may in the future be broken into smaller buffers when malloc requests for smaller buffer sizes occur. This results in small fragments of memory appearing between the memory buffers that are being used by tasks. These fragments are so small that they are useless to tasks, and they cannot be merged into bigger, useful buffer sizes. This will eventually result in situations where tasks ask for memory buffers of a certain size and are refused by the operating system, even though the operating system has enough available memory in its heap in total.

The fragmentation problem can be solved by so-called garbage collection (defragmentation) software. Unfortunately, garbage collection causes random, non-deterministic delays in the heap services, making heaps unsuitable for real-time systems (where we want to be able to predict all delays). So, what to do in real-time systems? Real-time operating systems offer non-fragmenting memory allocation techniques instead of heaps. They do this by limiting the variety of memory chunk sizes they make available to application tasks. For example, the pool memory allocation mechanism allows application tasks to allocate chunks of memory of perhaps 4 or 8 different buffer sizes per pool, see Figure 20. Pools avoid external memory fragmentation by not permitting a buffer that is returned to the pool to be broken into smaller buffers in the future. Instead, when a buffer is returned to the pool, it is put onto a "free buffer list" of buffers of its own size, available for future reuse at their original buffer size. Memory is allocated and de-allocated from a pool with deterministic, often constant, timing.
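A minimal fixed-size pool can be sketched in a few lines (our own illustration, not a particular RTOS's allocator; block count and size are made up). All blocks have the same size and freed blocks go back onto a free list at full size, so external fragmentation cannot occur and both operations are O(1):

```c
#include <stddef.h>

#define BLOCK_SIZE  32
#define NUM_BLOCKS  8

/* While a block is free, its first bytes hold the free-list link. */
typedef union block {
    union block *next;
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t storage[NUM_BLOCKS];   /* statically reserved pool memory */
static block_t *free_list;

void pool_init(void) {                /* chain all blocks onto the free list */
    free_list = &storage[0];
    for (int i = 0; i < NUM_BLOCKS - 1; i++)
        storage[i].next = &storage[i + 1];
    storage[NUM_BLOCKS - 1].next = NULL;
}

void *pool_alloc(void) {              /* O(1): pop the free list */
    block_t *b = free_list;
    if (b) free_list = b->next;
    return b;                         /* NULL if the pool is exhausted */
}

void pool_free(void *p) {             /* O(1): push back, size preserved */
    block_t *b = (block_t *)p;
    b->next = free_list;
    free_list = b;
}
```

A real RTOS would offer several such pools with different block sizes, and tasks would allocate from the pool whose block size fits their request.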

Figure 20: Memory allocation through pools. Each pool (Pool 1, Pool 2, Pool 3, ...) holds free buffers of one block size.

Device drivers

When constructing larger systems, it is a good idea to encapsulate all hardware-dependent software into device drivers, especially where the probability of replacing a hardware component in the future is high. Device drivers provide an interface between software and hardware: they manage hardware devices and they have more privileges than regular tasks. The interface to the application software should not contain any specific details about the underlying hardware device, because it should be possible to replace the device without changing the application software.

Figure 21 gives an example of a device driver for a circuit that sends and receives serial data (a UART). The driver offers the operations INIT, OPEN, READ, WRITE and CLOSE to the application task.

Figure 21: Example device driver interface.

When the system is started, INIT is called, typically with some input parameters that define the speed of sending bits, 7- or 8-bit characters, the number of start and stop bits, and even or odd parity. The device is exclusively reserved by calling OPEN, so that no other task can use it at the same time. The characters are read and written using the calls READ and WRITE. When we are done with the sending, CLOSE is called, which releases the device and makes it available for other tasks.

A device driver can be implemented in several ways. If it is supposed to be able to do buffering, a device driver is usually implemented as one or several tasks. If no buffering is needed, it can be implemented only by using semaphores. Figure 22 depicts a possible implementation of the device driver in the example above. As we can see in the figure, the interrupt routine communicates with the driver via buffers. We will now show what happens upon read and write operations.

Read – When a character is received by the hardware circuit, an interrupt is generated (a1 in the figure). The interrupt routine reads the character (a2) and puts it into the input buffer (a3). If there is a new incoming character in the input buffer, the driver will read it (b1) and deliver it to the application task (b3). If there is no new character (b2), the driver will wait until it arrives.

Write – The driver receives a write request from the task (d1) and puts the character into the output buffer (d2). The driver activates the hardware circuit by sending a write request interrupt (d3). The circuit then starts the interrupt routine (c1), which reads the character from the output buffer (c2) and puts it into the hardware register (c3).

Figure 22: Example implementation of a device driver. The interrupt routine exchanges characters with the driver through the in-buffer (a2–a3, b1–b3) and out-buffer (d1–d2, c1–c3); the driver offers INIT, OPEN, READ, WRITE and CLOSE to the task.

The interface to the device driver can be implemented something like this:

void user_task() {
   ...
   /* initiate the device driver (DD) */
   DD_init();
   ...
   while(1) {
      if (DD_open()) {                /* try to allocate the driver */
         DD_write("Print this...");   /* send a string to the driver */
         DD_write("...and this.");
         DD_close();                  /* release driver when done */
      }
      else
         /* driver is currently used by some other task */
      ...
   }
}

int DD_open() {
   if (lockSemaphore(S)) return OK;
   return FAILED;
}

int DD_close() {
   if (unlockSemaphore(S)) return OK;
   return FAILED;
}

void DD_write(char *str) {
   /* a task sends its strings to the driver via a message queue */
   msg.text = str;
   send(MSGQ, msg);
}

void task_DD() {
   while(1) {
      /* the driver just reads all received strings and prints them */
      receive(MSGQ, &msg);
      printf("%s", msg.text);
   }
}

Time-triggered RTOSs

So far we have mostly talked about event-triggered systems. As mentioned before, this type of RTOS is the most common in industry, since most of the commercial RTOSs are event-triggered. However, there is another type of RTOS: time-triggered real-time operating systems, in which all activities are carried out at certain points in time that are known a priori, e.g., at time t=5 run task A, at time t=12 run task B, etc. One reason for introducing support for time-triggered execution is the design of safety-critical systems, for which it must be possible to prove, or at least show, that the system behavior is correct. Many control systems require timely execution that can be guaranteed pre-run-time. Verification of correctness is facilitated by the time-triggered approach due to its reproducible behavior (the execution order is static).

Task execution

Tasks in a time-triggered RTOS are activated according to a time table (schedule), see Figure 23. A schedule is a table that is created before the start of the system; during run-time it repeats itself after some time (the cycle time).

Figure 23: Time-triggered execution. Tasks τ1, τ2, τ3, τ4 execute at their predefined times, after which the cycle, i.e., the schedule, is repeated.

There are two ways of implementing time-triggered systems. The simplest approach is to activate only one task at each clock tick, as illustrated in Figure 24-a. The problem is that if a task has a very short execution time (shorter than the length of the clock tick), it will still be allocated an entire clock tick, since only one task is released per tick. This results in poor utilization (usage) of the system, because the rest of the tick is unused. We could increase the clock resolution, but that would also increase the overhead of handling ticks (more clock interrupts).

A better approach is to allow the activation of several tasks per clock tick, by releasing a sequence of tasks (a chain) instead of just one task per tick, see Figure 24-b. Tasks are defined as successors to each other: when the first task in the chain has completed its execution, the next task in the chain is released at once, without waiting for the next clock tick. Task chains thus make it possible for several tasks to execute within one system clock tick. Whenever there are several task chains ready to execute, the one with the latest start time gets the highest priority, while the chain with the oldest start time gets the lowest priority. Another advantage of this approach is the easy implementation of preemption.

Another advantage of the task-chain approach is easy implementation of preemption. Usually, tasks share the same memory stack, which results in a very memory-efficient system. In order to make this work, a task that preempts another task must terminate and clean its data from the stack before the preempted task can be resumed, and there must not be any blocking primitives (like semaphore locks).

Task structure
From the user's point of view, a task in a time-triggered system is just a function. All task parameters are defined in the schedule (i.e., period, execution time), and the application programmer needs only to write the task code.

Communication and synchronization
With time-triggered scheduling there is no need to worry about concurrency control. All conflicts between tasks are resolved in the schedule, before the system starts to run. For example, if two or more tasks access the same resource, we simply construct the schedule so that the execution of the conflicting tasks is separated in time. This way, those tasks cannot access a shared resource at the same time. The user does not need to worry about mutual exclusion and concurrency control: we simply separate accesses to shared resources in time, or we put tasks that share a resource in the same chain, where tasks run sequentially one after the other and mutual exclusion is therefore guaranteed.

We can make a parallel to day-to-day life: time tables for trains. Assuming that no delays or breakdowns can occur, the construction of the time table guarantees that no two trains reside on the same railway section, and we could eliminate the whole signaling system for trains.

Time-triggered scheduling is also easy to implement, and for many simple systems this kind of scheduling approach is perfect. On the other hand, while being good for cyclic tasks, sporadic (non-periodic) activities really mess things up, especially those with short deadlines. For example, an event may occur at most once every ten seconds but need to be handled within 2 milliseconds. This has to be handled in the system by polling, and to meet a 2 ms deadline in the example system, a poll in each 1 ms slot is necessary. This wastes a lot of CPU time. Another drawback of the time-triggered approach is poor flexibility to include new activities (tasks) in the system. Once a schedule is made, it is usually fixed, and if we want to add something we need to reschedule the entire system. We will talk more about scheduling of both time-triggered and event-triggered systems in the scheduling chapter.

2.9 Example commercial RTOSs

Here we provide a brief summary of the features of some of the popular real-time operating systems. These are presented in three groups, viz. event-triggered commercial RTOSs, time-triggered RTOSs and research RTOSs.

Event-triggered RTOSs

VxWorks – This is one of the most widely used RTOSs on the market, developed by Wind River Systems. It supports many popular hardware platforms and has been used in many diverse applications over the past two decades. It supports multitasking with 256 priority levels and has deterministic context switching. Priority ceiling and priority inheritance protocols are supported to avoid the priority inversion problem. It runs on most modern CPUs, such as MIPS, PowerPC, StrongARM, xScale and x86. Support for multi-core processors, symmetric and asymmetric multiprocessing (SMP & AMP), an IPv6 network stack, and special development platforms tuned for safety-critical domains are among the key features of the latest versions.

pSoS – This OS is built around the concept of object orientation. Typical objects include tasks, semaphores and memory regions. It is built around a large number of APIs, and customizability is one of its strong features. It supports EDF as well as preemptive priority-based scheduling. Supervisory-mode execution of user tasks and dynamic loading of device drivers are some of the other features of pSoS.

QNX – This POSIX-compliant RTOS first appeared in 1982; QNX Neutrino is the latest version, and its source code is available from 2007 onwards. It is centered around a minimal microkernel and a host of user servers which can be shut down as needed, making it scalable from constrained embedded platforms to multiprocessor platforms. Synchronous message passing and application-level control over interrupt handling are among its features. Adaptive partitioning technology helps system designers guarantee responses to events, for example to guarantee a minimum CPU budget to the user interface of a device.

Windows CE – Windows CE is a small-footprint kernel supported on Intel x86 and compatibles, MIPS, ARM, SH4 and Hitachi SuperH processors. It supports 256 priority levels and priority inheritance, with priority-driven preemptive scheduling. All threads are enabled to run in kernel mode, and the kernel slices CPU time between threads. Nested interrupts and a fixed upper bound on interrupt latencies are some of the other features.

OS-9 – Originally developed for the Motorola processors during the early 80s. The architecture provides multitasking, synchronization, and a TCP/IP protocol stack. Kernel objects like processes, threads and semaphores are dynamically allocated in virtual memory. Preemptive scheduling and priority inheritance are supported, as well as a clear separation of kernel mode and user mode and the ability to run on 8-, 16- and 32-bit processors.

RT Linux – RT Linux (or RTCore) is a microkernel that runs the entire Linux operating system as a fully preemptable process, which runs at a lower priority than the real-time threads. It supports hard real-time operation through interrupt control between the hardware and the operating system: interrupts needed for deterministic processing are processed by the real-time core, while other interrupts are forwarded to Linux. Execution time for non-preemptable code is reduced by breaking the non-preemptable parts of the kernel into small sections. First-In-First-Out pipes (FIFOs) or shared memory can be used to share data between the operating system and RTCore. It originated from research at the New Mexico Institute of Mining and Technology and is currently available in two versions: a free version and a paid version from Wind River Systems.

There are several other commercial RTOSs of this category, such as RTEMS, Uc/OS, RTX, OSEK, Palm O/S, XP Embedded, DSP/BIOS etc.

Time-triggered RTOSs

TTP OS – Based on the time-triggered protocol (TTP), TTP OS combines a small footprint and fast context switch, and is mainly to be used for applications that have hard real-time requirements. Multiple time bases, such as the global fault-tolerant TTP time and local time, are supported, as well as synchronization to a global time and error-detection features to support fault-tolerance. Deadline monitoring for tasks and interrupt service handlers for aperiodic requests are provided.

Rubus – The Rubus RTOS has evolved from Basement, a distributed real-time architecture developed in the automotive industry and research at Mälardalen University. The key constituents of the Basement concept are:
• Resource sharing (multiplexing) of processing and communication resources.
• A communication infrastructure providing efficient communication between distributed devices.
• A guaranteed real-time service for safety-critical applications.
• A best-effort service for non-safety-critical applications.
• A program development methodology and tools allowing resource-independent and application-oriented development of application software.

The Rubus methods and tools have been used by the Swedish automotive industry for more than a decade. Three categories of run-time services are provided by the Rubus OS (each by a kernel with a name matching the color of the service):
• Green Run-Time Services: external event-triggered execution (interrupts).
• Red Run-Time Services: time-triggered execution, to be used for applications that have hard real-time requirements. To guarantee the real-time behavior of the safety-critical applications, static scheduling in combination with time-triggered execution is utilized.
• Blue Run-Time Services: internal event-triggered execution, to be used for applications that have soft real-time requirements. Dynamic scheduling with priority-based, preemptive execution is utilized for the non-safety-critical applications.

Research kernels

Spring – The Spring kernel was developed at the University of Massachusetts, Amherst, with the aim of providing scheduling support for distributed systems. It can dynamically schedule tasks based upon execution time and resource constraints. Admission control, planning-based scheduling and reflection are notable features of the Spring kernel. The kernel helps retain enough application semantics to improve fault-tolerance and performance on overloads. Spring supports an abstraction for process groups, which provides a high level of granularity and a real-time group communication mechanism. It supports both synchronous and asynchronous multicasting groups, achieves predictable low-level distributed communication via globally replicated memory, and provides abstractions for reservation, planning and end-to-end timing support, targeting both application- and system-level predictability.

MARTE – This is a research kernel developed by the University of Cantabria. It follows the minimal Real-Time POSIX.13 subset, supports Ada 2005 real-time features, and is written in Ada. Concurrency at thread level (the whole program is a single process), a single memory space (threads, drivers and OS) and static linking (the output is a single bootable image) are its main features. Both offline (e.g. MAST) and online (FRESCOR) scheduling are provided, as well as support for measuring time, efficient triggering of events, synchronization and mutual exclusion.

2.10 Exercises

1. Answer the following questions about real-time operating systems:
a) What is a real-time operating system (RTOS)? Explain the difference between an RTOS and a general-purpose operating system.
b) Each RTOS has something that is called "interrupt latency". What is that?
c) Explain at least three different states for a real-time task. Also, explain which transitions between the given states can take place.
d) Explain briefly the mechanisms provided by an RTOS to support shared resources.
e) What does re-entrant code mean?
f) Can Windows NT be used as an operating system for real-time applications? If no, motivate why not. If yes, motivate why yes and give an example of a real-time application that can run on Windows NT.

2. Assume two periodic tasks τ1 and τ2 that communicate with each other by sending messages. Task τ1 has an execution time of 200 ms and a period of 500 ms. Task τ2 has an execution time of 100 ms and a period of 300 ms. τ1 sends 3 messages to a message queue during each period (i.e., it sends 3 messages in each instance). τ2 manages to read 2 messages during its period (i.e., it reads 2 messages in each instance). Assume that τ2 has a higher priority than τ1 and that τ1 is allowed to send its messages at any point in time during its execution.

When a task reads a message from the message queue, the message is removed from the queue. The entire situation is depicted below:

τ1 (low priority): period = 500 ms, execution time = 200 ms, sends 3 messages to the message queue during its period (i.e., in each instance)
τ2 (high priority): period = 300 ms, execution time = 100 ms, reads 2 messages from the message queue during its period

Since we do not want to allocate more memory than necessary for the message queue, we would like to minimize its size. What is the minimum possible size of the message queue (counted in number of messages) such that we are able to guarantee that there will always be enough space in the queue for τ1 to insert its messages? Motivate your answer. Hint: think of the system behavior in the worst case, which is: both tasks are released simultaneously and τ2 preempts τ1. It helps a lot if you draw the execution trace for the tasks.

3. Assume the following three periodic tasks:

Task_τ1(void){
    while(1){
        /* do something */
        ...
        sleep(42);
    }
}

Task_τ2(void){
    while(1){
        /* do something */
        ...
        sleep(24);
    }
}

Task_τ3(void){
    while(1){
        /* do something */
        ...
        /* no sleep */
    }
}

a) Assign task priorities so that all tasks will be able to execute (i.e., no task must wait forever because of some other task). Motivate!
b) Give an example when a) is not fulfilled.
c) If you remove sleep(24) from τ2, is it possible to set priorities so that a) is fulfilled?