
1. What is the rate monotonic scheduling algorithm? What are the various assumptions in this algorithm?

Rate Monotonic Scheduling Algorithm


The Rate Monotonic Scheduling Algorithm (RMS) is important to real-time systems designers because it allows one to guarantee that a set of tasks is schedulable. A set of tasks is said to be schedulable if all of the tasks can meet their deadlines. RMS provides a set of rules which can be used to perform a guaranteed schedulability analysis for a task set. This analysis determines whether a task set is schedulable under worst-case conditions and emphasizes the predictability of the system's behavior.

It has been proven that RMS is an optimal static priority algorithm for scheduling independent, preemptible, periodic tasks on a single processor. RMS is optimal in the sense that if a set of tasks can be scheduled by any static priority algorithm, then RMS will also be able to schedule that task set. RMS bases its schedulability analysis on the processor utilization level below which all deadlines can be met.

RMS calls for the static assignment of task priorities based upon their period: the shorter a task's period, the higher its priority. For example, a task with a 1 millisecond period has higher priority than a task with a 100 millisecond period. If two tasks have the same period, then RMS does not distinguish between them. However, RTEMS specifies that when given tasks of equal priority, the task which has been ready longest will execute first.

RMS's priority assignment scheme does not provide exact numeric values for task priorities. For example, consider the following task set and priority assignments:

    Task   Period (in milliseconds)   Priority
    1      100                        Low
    2      50                         Medium
    3      50                         Medium
    4      25                         High

RMS only calls for task 1 to have the lowest priority, task 4 to have the highest priority, and tasks 2 and 3 to have an equal priority between that of tasks 1 and 4. The actual RTEMS priorities assigned to the tasks need only adhere to those guidelines.

Many applications have tasks with both hard and soft deadlines. The tasks with hard deadlines are typically referred to as the critical task set, with the soft-deadline tasks being the non-critical task set. The critical task set can be scheduled using RMS, with the non-critical tasks not executing under transient overload, by simply assigning priorities such that the lowest priority critical task (i.e., the one with the longest period) has a higher priority than the highest priority non-critical task. Although RMS may be used to assign priorities to the non-critical tasks, it is not necessary. In this instance, schedulability is only guaranteed for the critical task set.

The schedulability analysis rules for RMS were developed based on the following assumptions:

1. The requests for all tasks for which hard deadlines exist are periodic, with a constant interval between requests.

2. Each task must complete before the next request for it occurs.

3. The tasks are independent, in that a task does not depend on the initiation or completion of requests for other tasks.

4. The execution time for each task without preemption or interruption is constant and does not vary.

5. Any non-periodic tasks in the system are special; these tasks displace periodic tasks while executing and do not have hard, critical deadlines.
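Under these assumptions, a quick schedulability check compares the total processor utilization of the task set against the classic Liu and Layland bound U <= n(2^(1/n) - 1). The C sketch below applies that test to the four-task example above; the worst-case execution times are hypothetical values chosen only to illustrate the calculation, not part of any real task set.

    #include <math.h>
    #include <stdio.h>

    /* One periodic task: its period and worst-case execution time, both in ms. */
    struct task {
        double period_ms;
        double wcet_ms;   /* hypothetical worst-case execution times */
    };

    int main(void)
    {
        /* The example task set from the table above; WCETs are assumed values. */
        struct task set[] = {
            { 100.0, 10.0 },  /* task 1: longest period  -> lowest RMS priority  */
            {  50.0,  5.0 },  /* task 2 */
            {  50.0,  5.0 },  /* task 3 */
            {  25.0,  5.0 },  /* task 4: shortest period -> highest RMS priority */
        };
        int n = sizeof(set) / sizeof(set[0]);

        /* Total processor utilization: sum of C_i / T_i over all tasks. */
        double u = 0.0;
        for (int i = 0; i < n; i++)
            u += set[i].wcet_ms / set[i].period_ms;

        /* Liu & Layland bound: U <= n * (2^(1/n) - 1) guarantees schedulability. */
        double bound = n * (pow(2.0, 1.0 / n) - 1.0);

        printf("utilization = %.3f, bound = %.3f\n", u, bound);
        if (u <= bound)
            printf("task set is guaranteed schedulable under RMS\n");
        else
            printf("bound exceeded: a more exact analysis is needed\n");

        return 0;
    }

Note that exceeding the bound does not prove the task set unschedulable; it only means this simple sufficient test is inconclusive and a more exact analysis is required.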

2. Why is task synchronization required in a real-time operating system? Explain.

Most operating systems, including RTOSs, offer a variety of mechanisms for communication and synchronization between tasks. These mechanisms are necessary in a preemptive environment with many tasks, because without them the tasks might communicate corrupted information or otherwise interfere with each other. For instance, a task might be preempted while it is in the middle of updating a table of data. If a second task that preempts it reads from that table, it will read a combination of some areas of newly updated data plus some areas of data that have not yet been updated. [New Yorkers would call this a mish-mash.] These updated and old data areas together may be incorrect in combination, or may not even make sense.

An example is a data table containing temperature measurements that begins with the contents "10 C". A task begins updating this table with the new value "99 F", writing into the table character by character. If that task is preempted in the middle of the update, a second task that preempts it could read a value like "90 C", "99 C", or "99 F", depending on precisely when the preemption took place. The partially updated values are clearly incorrect, and are caused by delicate timing coincidences that are very hard to debug or reproduce consistently. An RTOS's mechanisms for communication and synchronization between tasks are provided to avoid these kinds of errors.

Synchronization is essential for tasks to share mutually exclusive resources (devices, buffers, etc.) and/or to allow multiple concurrent tasks to be executed (e.g., task A needs a result from task B, so task A can run only after task B produces it). Task synchronization is achieved using two types of mechanisms:

Event Objects: Event objects are used when task synchronization is required without resource sharing. They allow one or more tasks to keep waiting for a specified event to occur. An event object can exist either in a triggered or a non-triggered state; the triggered state indicates resumption of the waiting task.

Semaphores: A semaphore has an associated resource count and a wait queue. The resource count indicates the availability of the resource, and the wait queue manages the tasks waiting for resources from the semaphore. A semaphore functions like a key that defines whether a task has access to the resource: a task gets access to the resource when it acquires the semaphore. There are three types of semaphores:

- Binary semaphores
- Counting semaphores
- Mutual exclusion (mutex) semaphores
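As a concrete sketch of the mutual-exclusion idea described above, the following C example uses a POSIX mutex to protect the temperature record from the partial-update problem. The task functions, the 4-character record format, and the use of pthreads (rather than a specific RTOS API) are illustrative assumptions.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    /* Shared "table": a single temperature record, e.g. "10 C" or "99 F". */
    static char temperature[8] = "10 C";
    static pthread_mutex_t temp_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Writer task: updates the record under the mutex, so a preempting
       reader can never observe a half-written value such as "90 C".      */
    static void *update_task(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&temp_lock);
        strcpy(temperature, "99 F");   /* whole update is atomic w.r.t. readers */
        pthread_mutex_unlock(&temp_lock);
        return NULL;
    }

    /* Reader task: takes the same mutex before reading, so it sees either
       the old value "10 C" or the new value "99 F", never a mixture.      */
    static void *read_task(void *arg)
    {
        char snapshot[8];
        (void)arg;
        pthread_mutex_lock(&temp_lock);
        strcpy(snapshot, temperature);
        pthread_mutex_unlock(&temp_lock);
        printf("temperature = %s\n", snapshot);
        return NULL;
    }

    int main(void)
    {
        pthread_t writer, reader;
        pthread_create(&writer, NULL, update_task, NULL);
        pthread_create(&reader, NULL, read_task, NULL);
        pthread_join(writer, NULL);
        pthread_join(reader, NULL);
        return 0;
    }

The same pattern applies with any RTOS's mutex or binary semaphore primitives: acquire before touching the shared resource, release immediately afterwards.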
