RTOS Fundamentals


Intro to real-time theory
• In real-time systems, providing the result within a deadline is as important as providing the correct answer.
• Here, "a late answer is a wrong answer".
• This can be compared to a quiz programme: a late answer is not accepted.
• A hard real-time system is one in which missing a deadline can cause great loss of life or property.
• Aerospace/space navigation systems and nuclear power plants are examples.
• A soft real-time system is one where the system is resilient to missing a few deadlines.
• Examples are DVD players and music systems; the user usually tolerates an occasional glitch.

Fast & hard real time
• These two definitions do not include any notion of the speed with which the system must respond; they describe the criticality of meeting the deadlines.
• If the system has to meet its deadlines within a few micro- or milliseconds, then it is called a fast real-time system.
• For example, consider watching a video over a broadband network: the system is receiving data at a few Mbps. This is a fast real-time system.
• But it is not a hard-deadline system, because a rare glitch in audio/video is tolerated and does not cause loss of life or property.
• Similarly, hard real-time systems need not be fast.


Realtime systems
• It is this timeliness factor which distinguishes realtime software from normal application software targeted at desktop computers.
• In desktop software, usually ensuring correctness is sufficient.
• In realtime software, we have to ascertain that all deadlines are met before deploying the software.
• Thus, while developing realtime software, we should do "Performance Analysis" during the design phase.

Brief history of RTOS
• In the olden days of computing, developers created software applications that included low-level machine code to initialize and interact with the system's hardware directly.
• This tight integration between the software and the hardware resulted in non-portable applications.
• These systems were difficult and costly to maintain.
• A small change in hardware resulted in rewriting much of the application itself.

History of RTOS
• The so-called operating system offers abstraction of the underlying hardware from the application code.
• In addition, the evolution of the OS helped shift the design of software applications from large, monolithic applications to more modular, interconnected applications that could run on top of the operating system environment.

History of RTOS
• In the 60s and 70s, when mid-sized and mainframe computing was in its prime, UNIX was developed to facilitate multi-user access to expensive, limited-availability computing systems.
• UNIX allowed many users to share these large and costly computers, and multi-user access was very efficient.
• UNIX was also ported to all types of machines, from microcomputers to supercomputers.
• In the 80s, the Windows operating system, targeted at the personal computing environment, helped residential and business users interact with PCs through a Graphical User Interface.

RTOS
• An RTOS is not usually such a complex piece of software when compared to the mammoth-sized OSes currently available.
• Though current RTOSes provide a huge variety of features, a basic RTOS is just small enough to provide some scheduling, memory management and a decent level of hardware abstraction.

Similarities between a Real Time Operating System and a General Purpose Operating System
• Some level of multitasking
• Software and hardware resource management
• Providing OS services to applications
• Abstracting the hardware from the software applications

OS vs RTOS
• A desktop OS is usually huge in size; it is not uncommon to see OSes using 300-500 MB just for their installation.
• A desktop OS has huge libraries for UI management, support for networking protocols, and fancy features like Plug 'n' Play.
• They also implement complex policies for network communication and facilities for binary reusability such as DLLs, COM/DCOM, .NET, etc.
• But in embedded systems, even 8-16 MB of memory is considered a luxury.

Special requirements of an RTOS are:
• Better reliability in the embedded environment (harsh)
• Ability to scale up or down to meet the application needs (toy to aircraft)
• Faster performance
• Reduced memory needs
• Scheduling policies tailored for real-time embedded systems
• Support for diskless embedded systems
• Better portability to different hardware platforms

Kernel
• Every RTOS has a kernel, which is the core supervisory software that provides minimal logic, scheduling and resource-management algorithms.
• In some applications, an RTOS consists of only a kernel.
• An RTOS can also be a combination of various modules, including the kernel, a file system, networking protocol stacks and other components needed for a particular application.

Some of the common elements at the heart of an RTOS – the kernel – are:
• Scheduler – a set of algorithms that determines which task executes when.
• Objects – constructs that help developers create applications for real-time embedded systems. Common kernel objects are tasks, semaphores and message queues.
• Services – operations that the kernel performs on an object, e.g. timing, interrupt handling and resource management.

Every RTOS will provide at least the following features:
• Task Creation / Management
• Scheduling
• Inter-task Communication / Synchronization
• Memory Management
• Timers
• Support for ISRs
Some good RTOSes provide support for TCP/IP, FTP, telnet, etc.

Desktop vs RTOS
• In a desktop development environment, a programmer opens his IDE and types his code.
• Then he builds it using his compiler and executes his program.
• Here, the OS is already running and it 'loads' the executable program.
• The OS takes care of scheduling it.
• The program makes use of the OS services.
• The OS can run other programs even while our program is running.
• When the program completes, it exits.
• Programs have a definite exit point, and programs that contain infinite loops are considered bad.

BSP
• But this is not the case in most embedded systems.
• In an embedded system, the software is written by us; we write the software that is going to run on the target board.
• Usually there will be no dedicated OS already running on the target to load our programs.
• Usually embedded systems cannot afford the luxury of having hard disks.
• In fact, the software we write, the RTOS (usually in the form of some library) and a special component called the Board Support Package (BSP) are bundled into a single executable file.
• This executable is burnt into a Boot-ROM or Flash. Sometimes, the code is obtained over a network on being powered up.
• Thus there is no OS during startup.
• Here the BSP is part of the OS code in the sense that it is used by the OS to talk to the different hardware on the board.

Need for a Board Support Package in Embedded Systems
• The OS needs to interact with the hardware on the board.
• For example, if we have a TCP/IP stack integrated with our software, it should finally talk to the Ethernet controller on our board to put the data on the networking medium.
• The TCP/IP package has no clue about the Ethernet controller itself (say, 3Com or Intel), the location of the Ethernet controller, or the way to program it.
• Thus a BSP is highly specific to both the board and the RTOS for which it is written.
• The BSP / startup code will be the first to be executed at the start of the system.

The BSP code does the following:
• Initialization of the processor (the BSP code initializes the mode of operation of the processor; it sets various parameters required by the processor), memory initialization, clock setup and setting up of various components such as the cache.
• This is the first part. These are low-level operations and are done in assembly language.
• The next part consists of drivers required by the RTOS to use some peripherals (e.g. Ethernet driver, video, UART, etc.).
• Coding a BSP is a tough task in embedded systems, as it requires mastery of both the hardware and the software.

Various components of an RTOS
Task Management
• Simple software applications are typically designed to run sequentially, one instruction at a time, in a pre-determined chain of instructions.
• However, this scheme is inappropriate for real-time embedded applications, which generally handle multiple inputs and outputs within tight time constraints.
• Real-time embedded software applications must be designed for concurrency.

Concurrency
• Concurrent design requires developers to decompose an application into small, schedulable, and sequential program units.
• When done correctly, concurrent design allows system multitasking to meet the performance and timing requirements of a real-time system.

Task
• Task management consists of the following: Task Creation, Task Scheduling and Task Deletion.
• A Task is the atomic unit of execution that can be scheduled by an RTOS to use the system resources.
• Resources such as the CPU, memory, input/output devices, etc. are called system resources.
• When we say that a task is an atomic unit, we mean that any other entity smaller than a task cannot compete for system resources.
• Thus a task is an independent thread of execution that can compete with other concurrent tasks for processor execution time.

Task Creation
• A task first needs to be 'created'.
• A task is characterized by the following parameters and supporting data structures: 1) a task name, 2) priority, 3) stack size and 4) OS-specific options.
• These parameters can be used to create a task. A typical call might look as follows:

result = task_create("TxTask", 100, 0x4000, OS_PREEMPTABLE);
if (result == OS_SUCCESS)
{
    // task created
}

• By now, a task control block (TCB) will have been created by the RTOS.
• At this stage, the task still does not have the code to execute; the code is in an embryonic state (Dormant).

System tasks
• When a kernel first starts, it creates its own set of system tasks and allocates the appropriate priority for each from a set of reserved priority levels.
• E.g. the initialization or startup task, idle task, logging task, exception-handling task and debug agent task (which allows debugging with a host debugger).
• The system priorities should not be modified.
• An application should avoid using these priority levels for its tasks, because running application tasks at such levels may affect the overall system performance or behavior.

Control Blocks
• The use of control blocks is not limited to tasks; there are control blocks for memory, synchronization, etc.
• The control blocks are internal to the RTOS.
• There is absolutely no need for a programmer to access these blocks directly.
• But a programmer needs to know how these control blocks are used by the RTOS.

TCB
• An RTOS usually reserves a portion of the available memory for itself during startup.
• This chunk of memory is used to maintain structures such as TCBs.
• An RTOS uses the TCB of a task to store all the relevant information regarding the task.
• The TCB will usually consist of the state of the task, its priority and its RTOS-specific parameters (say, the scheduling policy).
• When a task is blocked, the values of the registers are saved in its context in the TCB, i.e. the values the registers had when the task was preempted.
• When the task is scheduled again, the system registers are restored with these saved values, so that the task will not even know it was pre-empted.

Task states
• A task can be in one of the following states: 1) Dormant 2) Ready 3) Running 4) Blocked.
• Dormant – The task has been created, but not yet added to the RTOS for scheduling.
• Ready – When a task is added to the RTOS for scheduling, it usually arrives in the ready state. The task can run, but it cannot do so currently if a higher-priority task is executing.
• Running – The task is currently using the CPU. When a task is running and another higher-priority task becomes ready, the running task is pre-empted and the highest-priority task is scheduled for execution.
• Blocked – During the course of its execution, a task may require a resource or an input. If the resource / input is not immediately available, the task gets blocked.

Granular states
• Some commercial kernels, such as the VxWorks kernel, define other, more granular states such as suspended, pended and delayed.
• Pended and delayed are sub-states of the blocked state.
• A pended task is waiting for a resource that it needs to be freed.
• A delayed task is waiting for a timing delay to end.
• The suspended state exists for debugging purposes.

Different Task States

Ready State
• When a task is first created and made ready to run, the kernel puts it into the ready state.
• Here the task actively competes with all other ready tasks for the processor's execution time.
• Because many tasks might be in the ready state, the kernel's scheduler uses the priority of each task to determine which task to move to the running state.
• Tasks in this state cannot move directly to the blocked state; a task can only move to the running state.

Ready state
• Many kernels support more than one task per priority level, allowing many more tasks in an application.
• In that case the scheduling algorithm is more complicated and involves maintaining a task-ready list.
• Some kernels maintain a separate task-ready list for each priority level; others have one combined list.
• A kernel uses this list to move tasks from the ready state to the running state.
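A per-priority task-ready list is often paired with a bitmap of non-empty levels, so the scheduler can find the highest-priority ready task quickly. The sketch below is a generic illustration, not the ready-queue layout of any particular kernel; it assumes the convention that a numerically lower value means a higher priority.

```python
# A ready list per priority level, plus a bitmap of non-empty levels.
# Convention assumed here: lower priority value = higher priority.
NUM_PRIORITIES = 8

ready_lists = [[] for _ in range(NUM_PRIORITIES)]
ready_bitmap = 0  # bit i set => at least one ready task at priority i

def make_ready(task, priority):
    global ready_bitmap
    ready_lists[priority].append(task)   # FIFO within one priority level
    ready_bitmap |= 1 << priority

def pick_next_task():
    """Return the first task at the highest non-empty priority, or None."""
    global ready_bitmap
    if ready_bitmap == 0:
        return None  # nothing ready; a real kernel would run the idle task
    level = (ready_bitmap & -ready_bitmap).bit_length() - 1  # lowest set bit
    task = ready_lists[level].pop(0)
    if not ready_lists[level]:
        ready_bitmap &= ~(1 << level)    # level emptied; clear its bit
    return task

make_ready("logger", 5)
make_ready("motor_ctrl", 1)
print(pick_next_task())  # prints "motor_ctrl": priority 1 beats priority 5
```

The bitmap makes "find the highest-priority non-empty list" a constant-time operation (a find-first-set instruction on real hardware), which is why many kernels organize their ready queues this way.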

Running State
• In a single-processor system, only one task can run at a time.
• When a task is moved to the running state, the processor loads its registers with this task's context.
• The processor can then execute the task's instructions and manipulate the associated stack.
• A task in this state can move to the blocked state.
• When a task is pre-empted, it is put in the appropriate, priority-based location in the task-ready list, and the higher-priority task is moved from the ready state to the running state.

Blocked state
• The possibility of blocked states is very important in real-time systems, because without blocked states lower-priority tasks could not run.
• If higher-priority tasks are not designed to block, CPU starvation can result.
• CPU starvation occurs when higher-priority tasks use all the CPU execution time and lower-priority tasks do not get to run.
• When a task becomes unblocked, it moves from the blocked state to the ready state if it is not the highest-priority task.
• The task is then put into the task-ready list at the appropriate priority-based location.
• However, if the unblocked task is the highest-priority task, it moves directly into the running state (without going through the ready state) and preempts the currently running task.
• The preempted task is moved to the ready state and put into the appropriate priority-based location in the task-ready list.

Idle task
• What happens if no task is ready to run and all of them are blocked? The RTOS would be in trouble.
• So, in this situation an RTOS will usually execute a task called the idle task.
• An idle task does nothing; it has no code except an infinite loop:

void IdleTask (void)
{
    while (1);
}

• This has no system calls.
• In an RTOS, the idle task has the lowest priority.
• Many RTOSes reserve a few of the lowest and highest priority levels for themselves.
• For example, if an RTOS can provide 256 priority levels, it may reserve the lowest 10 and the highest 10, leaving the user with 236 priority levels in the range 10-245.

CPU loading
• Though an idle task does nothing, we can use it to determine the CPU loading – the average utilization ratio of the CPU.
• This can be done by making the idle task write the system clock to some memory location whenever it gets scheduled.
• Is an ISR also a task? No.
• An ISR is a routine that is called by the system in response to an interrupt event, whereas a task is a standalone executable entity.
• But some newer RTOSes model ISRs as high-priority threads.
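The idle-task technique for estimating CPU load can be sketched as follows. The tick counts and the measurement window are invented for illustration; the only idea taken from the slide is that every tick the idle task was running is a tick no real task needed the CPU.

```python
# Estimating CPU load from the idle task: count how many ticks of a
# measurement window the idle task was running; during every other tick,
# some real task was using the CPU.
def cpu_load_percent(idle_ticks, window_ticks):
    """Average CPU utilization over the window, as a percentage."""
    busy_ticks = window_ticks - idle_ticks
    return 100.0 * busy_ticks / window_ticks

# Hypothetical numbers: out of 1000 ticks, the idle task ran for 250.
print(cpu_load_percent(250, 1000))  # prints 75.0 -> CPU busy 75% of the time
```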

Tasks, processes and threads
• The term 'task' refers to something that needs to be done.
• In the OS parlance, a task is defined as the program in execution together with the related information maintained by the OS for that program.
• A task is also known as a 'job' in the OS context.
• A program, or part of it, in execution is also called a 'process'.
• The terms 'task', 'job' and 'process' refer to the same entity in the OS context, and most often they are used interchangeably.
• In addition, we will have an order of priority and a schedule timeline for executing these tasks.

Process
• A 'process' is a program, or part of it, in execution.
• A process needs various system resources, like the CPU for executing the process, memory for storing the code corresponding to the process and its associated variables, and I/O devices for information exchange.
• Multiple instances of the same program can execute simultaneously.
• A process is sequential in execution.

Structure of a process
• The concept of a 'process' leads to concurrent execution of tasks and the efficient utilization of the CPU and other system resources.
• A process mimics a processor in properties and holds a set of registers, a process status, a Program Counter (PC) to point to the next executable instruction of the process, a stack for holding the local variables associated with the process, and the code corresponding to the process.


Virtual Processor
• A process which inherits all the properties of the CPU can be considered a virtual processor, awaiting its turn to have its properties switched into the physical processor.
• When the process gets its turn, its registers and its program counter register become mapped to the physical registers of the CPU.
• The memory occupied by the process is segmented into 3 regions, namely stack memory, data memory and code memory.

Virtual processor
• The stack memory holds all temporary data, such as variables local to the process.
• The data memory holds all the global data for the process.
• The code memory contains the program code corresponding to the process.
• On loading a process into main memory, a specific area of memory is allocated for the process.

Memory organization of a process

Process Management
• Process management deals with the creation of a process, setting up the memory space for the process, loading the process's code into the memory space, allocating system resources, setting up a Process Control Block (PCB) for the process, and process termination / deletion.

Threads
• A thread is the primitive that can execute code.
• A thread is a single sequential flow of control within a process.
• A thread is also known as a lightweight process.
• A process can have many threads of execution.
• Different threads, which are part of a process, share the same address space; meaning they share the data memory, code memory and heap memory areas.
• Threads maintain their own thread status (CPU register values), Program Counter (PC) and stack.
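The point that threads of one process share data memory while each keeps its own stack can be demonstrated with Python's standard threading module, used here as a stand-in for whatever thread API an RTOS provides:

```python
import threading

# Shared (process-wide) data: visible to every thread of the process.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    local_sum = 0            # local variable: lives on this thread's own stack
    for _ in range(n):
        local_sum += 1
    with lock:               # shared data needs synchronization
        counter += local_sum

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # prints 4000: all four threads updated one shared variable
```

Each `local_sum` is private to its thread (its own stack), while `counter` lives in the shared data memory of the process, which is why the update must be protected by a lock.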

Memory organization of a process and its thread

Multithreading
• A process / task in an embedded application may be a complex or lengthy one, and it may contain various sub-operations like getting input from I/O devices connected to the processor, performing some internal calculations / operations, updating some I/O devices, etc.
• If all the sub-functions of a task are executed in sequence, the CPU utilization may not be efficient.
• For example, if the process is waiting for a user input, the CPU enters the wait state for the event, and the process execution also enters a wait state.

Multithreading
• Instead of this single sequential execution of the whole process, if the task / process is split into different threads carrying out the different sub-functionalities of the process, the CPU can be effectively utilized: when the thread corresponding to the I/O operation enters the wait state, another thread which does not require the I/O event for its operation can be switched into execution.
• This leads to speedy execution of the process and efficient utilization of the processor time and resources.
• The multithreaded architecture of a process can be visualized with a thread-process diagram.


Advantages of Multithreading
The advantages of multithreading are:
• Better memory utilization: multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads.
• Efficient CPU utilization: the CPU is engaged all the time. If the process is split into multiple threads, when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not require that event.
• Speedy execution: since the process is split into different threads, each executing a portion of the process, one thread can run while another is waiting. This speeds up the execution of the process.
• For processing, there will be a main thread, and the rest of the threads will be created within the main thread.


Multiprocessing and multitasking
• Multiprocessing describes the ability to execute multiple processes simultaneously.
• Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously.
• The ability of an OS to have multiple programs in memory, which are ready for execution, is called multiprogramming.
• In a uniprocessor system, it is possible to achieve some degree of pseudo-parallelism in the execution of multiple processes by switching the execution among the different processes.
• The ability of an OS to hold multiple processes in memory and switch the processor (CPU) from one process to another is known as multitasking.

Context switching
• In a multitasking environment, when task / process switching happens, the virtual processor gets its properties converted into those of the physical processor.
• The switching of the virtual processor to the physical processor is controlled by the scheduler of the OS kernel.
• During CPU switching, the current context of execution is saved and used later when the CPU executes the process again.
• This is known as context switching.
• The act of saving the context is called 'Context Saving'.
• The process of retrieving the saved context is called 'Context Retrieval'.
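Context saving and context retrieval can be sketched as copying the CPU's register set into the outgoing task's TCB and loading it from the incoming task's TCB. The register names and TCB layout below are invented for illustration; a real kernel does this in assembly on the physical registers.

```python
# Minimal context-switch simulation: 'cpu' stands for the physical register
# set; each task's saved context lives in its TCB.
cpu = {"pc": 0, "sp": 0, "r0": 0}

tcbs = {
    "taskA": {"context": {"pc": 0x100, "sp": 0x8000, "r0": 7}},
    "taskB": {"context": {"pc": 0x200, "sp": 0x9000, "r0": 42}},
}

def context_switch(outgoing, incoming):
    tcbs[outgoing]["context"] = dict(cpu)   # context saving
    cpu.update(tcbs[incoming]["context"])   # context retrieval

# taskA is "running": load its context and let it make some progress...
cpu.update(tcbs["taskA"]["context"])
cpu["pc"] = 0x104
# ...then switch to taskB and back; taskA resumes exactly where it left off.
context_switch("taskA", "taskB")
print(cpu["r0"])       # prints 42: taskB's registers are now live
context_switch("taskB", "taskA")
print(hex(cpu["pc"]))  # prints 0x104: taskA continues from its saved pc
```

Because the saved values are restored bit-for-bit, the resumed task cannot tell it was ever switched out, which is the transparency property described in the TCB slide.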

Toss Juggling
• The skilful object-manipulation game is a real-world example of the multitasking illusion.
• The juggler uses a number of objects (balls, rings, etc.) and throws them up and catches them.
• At any point of time, he throws only one ball and catches only one per hand.
• However, the speed at which he switches the balls for throwing and catching creates the illusion for spectators that he is throwing and catching multiple balls at once.

Co-operative Multitasking
• There are various types of multitasking in the operating-system context.
• Co-operative multitasking is the most primitive form of multitasking, in which a task / process gets a chance to execute only when the currently executing task / process voluntarily relinquishes the CPU.
• Here, any task / process can hold the CPU for as much time as it wants. It involves the mercy of the tasks for getting CPU time for execution.
• If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU.

Preemptive multitasking
• Preemptive multitasking ensures that every task / process gets a chance to execute.
• Here, the currently running task / process is preempted to give a chance to other tasks / processes to execute.
• The preemption of a task may be based on time slots or on task / process priority.
• When, and for how much time, a process gets the CPU depends on the implementation of the preemptive scheduling.

Non-preemptive multitasking
• Here, the process / task which is currently given the CPU is allowed to execute until it terminates or enters the 'Blocked / Wait' state, waiting for an I/O or system resource.
• Co-operative and non-preemptive multitasking differ in their behavior around the 'Blocked / Wait' state.
• In co-operative multitasking, the currently executing task / process need not relinquish the CPU when it enters the 'Blocked / Wait' state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O, a system resource or an event to occur.

Scheduler
• There should be some mechanism in place to share the CPU among the different tasks and to decide which process / task is to be executed at a given point of time.
• Determining which task / process is to be executed at a given point of time is known as task / process scheduling.
• The scheduling policies are implemented in an algorithm, and it is run by the kernel as a service.
• The kernel service / application which implements the scheduling algorithm is known as the 'Scheduler'.

Selection of a scheduling algorithm
This depends on the following factors:
• CPU utilization – The scheduling algorithm should always keep the CPU utilization high. It is a direct measure of what percentage of the CPU is being utilized.
• Throughput – It gives an indication of the number of processes executed per unit of time. It should always be as high as possible.
• Response time – It is the time elapsed between the submission of a process and its first response. It should be as low as possible.

Selection of a scheduling algorithm
• Turnaround time – It is the amount of time taken by a process to complete its execution. It includes the time spent by the process waiting for main memory, the time spent in the ready queue, the time spent completing I/O operations, and the time spent in execution. It should be minimal for a good scheduling algorithm.
• Waiting time – It is the amount of time spent by a process in the 'Ready' queue waiting to get the CPU time for execution. It should be minimal for a good scheduling algorithm.
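Waiting time and turnaround time can be computed for a concrete schedule. The sketch below uses first-come-first-served order; the burst times are invented, and all processes are assumed to arrive at time 0 to keep the arithmetic simple.

```python
# Waiting time and turnaround time under FCFS, assuming all processes
# arrive at time 0 in the order given. Burst times are illustrative.
def fcfs_metrics(bursts):
    """Return (waiting_times, turnaround_times) for an FCFS schedule."""
    waiting, turnaround, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)          # time spent in the Ready queue
        clock += burst
        turnaround.append(clock)       # from submission (t=0) to completion
    return waiting, turnaround

w, t = fcfs_metrics([10, 5, 7])
print(w)                # prints [0, 10, 15]
print(sum(w) / len(w))  # average waiting time: 25/3, about 8.33
```

A good scheduling algorithm is one that keeps both averages low; the later slides on SJF show how reordering the same bursts changes these numbers.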

Queues for scheduling
The various queues maintained by the OS for CPU scheduling are:
• Job Queue: It contains all the processes in the system.
• Ready Queue: It contains all the processes which are ready for execution and waiting for the CPU to get their turn. It is empty when there is no process ready for running.
• Device Queue: It contains the set of processes which are waiting for an I/O device.

Task scheduling – pre-emptive scheduling
• Task scheduling is one of the important reasons for choosing an RTOS. There are many scheduling policies. The most used ones are:
Strictly pre-emptive scheduling
• It is one of the most widely used scheduling policies in an RTOS.
• Here, at any time, only the highest-priority task that is ready to run executes.
• If a task Ti runs, it means that all ready tasks Tj with priorities lower than that of Ti are kept waiting.
• The programmer can assign priorities to the various tasks and rest assured that the RTOS will do the needed scheduling.
• This scheduling policy ensures that important tasks are handled first and the less important ones later.
• For example, in an aircraft cruise control system, the flight-controller task will have higher priority than a task that controls the air-conditioning system.
• But this is not the preferred policy on a desktop.

Pros and cons of pre-emptive scheduling
• Pros: Once the priorities are set properly, we can rest assured that the important things are handled first.
• Cons: It is possible that one or more of the lower-priority tasks do not get to execute at all. Hence, to avoid this, proper analysis should be done at the design phase.
• Note that in almost all systems, ISRs have the highest priority, irrespective of the priorities assigned to the tasks.

Time slicing
• Here the CPU time is shared between all the tasks: each task gets a fraction of the CPU time.
• This scheduling is also known as round-robin scheduling.
• There is no notion of priority here.
• This kind of kernel is easy to implement.
• The pre-emption time of a task is deterministic, i.e. if a task is pre-empted, we will know exactly the time after which the task will be scheduled again (if the number of tasks in the system does not vary over time).
• It is not used in its original form; it can be used in conjunction with pre-emptive scheduling.
• In a pre-emptive system, if two or more tasks have the same priority, we can make the scheduler use time slicing for those tasks with the same priority.
• Pros: No need for complex analysis of the system.
• Cons: This is a very rigid scheduling policy, i.e. there is no notion of priority.
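Round-robin time slicing can be sketched as a simulation: each task runs for at most one quantum, then is requeued at the back if it still has work left. The quantum and burst times are invented for illustration.

```python
from collections import deque

# Round-robin scheduling simulation: each task runs for at most one time
# quantum, then goes to the back of the queue if it still has work left.
def round_robin(bursts, quantum):
    """Return the order in which (task, run_time) slices execute."""
    queue = deque((name, burst) for name, burst in bursts.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        trace.append((name, run))
        if remaining > run:
            queue.append((name, remaining - run))  # not done: requeue at back
    return trace

trace = round_robin({"A": 5, "B": 3}, quantum=2)
print(trace)  # prints [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

The trace shows the determinism the slide mentions: with a fixed set of tasks, a preempted task gets the CPU back after exactly one quantum per other ready task.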

Fairness Scheduling
• Here every task is given an opportunity to execute.
• Unlike pre-emptive scheduling, in which a lower-priority task may not get an opportunity to execute, here every task will be given a 'fair' chance to execute.
• Though some kind of priority mechanism can be incorporated here, it is not strict.
• The priority of a task which has not executed for some period will gradually be increased by the RTOS, and the task will finally get a chance to execute.

Pros and cons of fairness scheduling
• This scheduling policy is complex. How to vary the priority of tasks so that fairness is achieved is itself a tough problem.
• This kind of scheduling is widely used in desktop OSes; we listen to music while compiling our programs.
• And it does not fit well in real-time systems.
• Pros: Every task will get an opportunity to execute.
• Cons: Introduces non-determinism into the system.

Non-preemptive Scheduling
• Here the currently executing task / process is allowed to run until it terminates or enters the 'Wait' state, waiting for an I/O or system resource. The various types of non-preemptive scheduling algorithms are described below.
First-Come-First-Served (FCFS) / FIFO scheduling
• Here the algorithm allocates CPU time to the processes based on the order in which they enter the 'Ready' queue.
• The first entered process is serviced first. FCFS is also known as First In First Out (FIFO).
• It is the same as any real-world application where queue systems are used.
• E.g. a ticketing reservation system, where people need to stand in a queue and the first person standing in the queue is serviced first.

Drawback of FCFS
• The major drawback here is that it favors monopoly of a process.
• A process which does not contain any I/O operation continues its execution until it finishes its task.
• If the process contains an I/O operation, the CPU is relinquished by the process.
• In general, FCFS favors CPU-bound processes, and I/O-bound processes may have to wait until the completion of a CPU-bound process.
• This leads to poor device utilization.
• The average waiting time is not minimal for the FCFS scheduling algorithm.

LCFS / LIFO
• The Last-Come-First-Served scheduling algorithm also allocates CPU time to the processes based on the order in which they enter the 'Ready' queue.
• Here, the last entered process is serviced first.
• LCFS scheduling is also known as LIFO, where the process which is put last into the 'Ready' queue is serviced first.
• It is also not optimal and possesses the same drawbacks as the FCFS algorithm.

Shortest Job First
• The algorithm 'sorts' the 'Ready' queue each time a process relinquishes the CPU (either the process terminates or enters the 'Wait' state for an I/O or system resource) to pick the process with the shortest (least) estimated completion / run time.
• In SJF, the process with the shortest estimated run time is scheduled first, followed by the next shortest process, and so on.
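The effect of SJF ordering can be illustrated by sorting the ready queue by estimated burst time and comparing the average waiting time with the submission (FCFS) order. The burst times are invented, and all processes are assumed to arrive at time 0.

```python
# Non-preemptive SJF: pick the process with the shortest estimated burst
# first. Compare the average waiting time with plain FCFS order.
def avg_waiting(bursts):
    """Average waiting time when bursts run back-to-back in the given order."""
    waiting, clock = 0, 0
    for b in bursts:
        waiting += clock   # each process waits for all the ones before it
        clock += b
    return waiting / len(bursts)

bursts = [10, 2, 6]                 # FCFS: run in submission order
print(avg_waiting(bursts))          # FCFS: (0 + 10 + 12) / 3, about 7.33
print(avg_waiting(sorted(bursts)))  # SJF:  (0 + 2 + 8) / 3, about 3.33
```

Running the short jobs first means fewer processes wait behind the long one, which is why SJF minimizes the average waiting time for a fixed set of processes.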

Drawback of SJF
• The average waiting time for a given set of processes is minimal in SJF scheduling, and so it is optimal compared to other non-preemptive scheduling algorithms like FCFS.
• The main drawback of the SJF algorithm is that a process whose estimated execution time is long may not get a chance to execute if more and more processes with smaller estimated execution times enter the 'Ready' queue before the long process starts its execution (in pre-emptive SJF).
• This condition is known as 'Starvation'.
• Another drawback of SJF is that it is difficult to know in advance the next shortest process in the 'Ready' queue for scheduling, since new processes with different estimated execution times keep entering the 'Ready' queue at any point of time.

Priority Based Scheduling
• The Turn Around Time (TAT) and waiting time for processes in non-preemptive scheduling vary with the type of scheduling algorithm. • A priority based non-preemptive scheduling algorithm ensures that a process with high priority is serviced earlier than the low priority processes in the ‘Ready’ queue. • The priority of a task / process can be indicated by many mechanisms. • In the SJF algorithm, each task is effectively prioritized in the order of the time required to complete it: the lower the time required for completing a process, the higher is its priority.

Priority in Windows CE
• Another way of assigning priority is to associate a priority with the task / process at the time of its creation. • Windows CE supports 256 levels of priority, where 0 indicates the highest priority and 255 indicates the lowest priority. • The non-preemptive priority based scheduler sorts the ‘Ready’ state queue based on priority and picks the process with the highest priority for execution.

• Similar to the SJF scheduling algorithm, the non-preemptive priority based algorithm also possesses the drawback of ‘Starvation’, where a process whose priority is low may not get a chance to execute if more and more processes with higher priorities enter the ‘Ready’ queue before the low priority process starts its execution. • Starvation can be tackled in priority based non-preemptive scheduling by dynamically raising the priority of the low priority task which is under starvation. • This technique is known as ‘Aging’.
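The ‘Aging’ idea can be sketched as follows. The boost amount and task names are made up; smaller numbers mean higher priority, following the Windows CE convention above:

```python
# Sketch of 'Aging': each time the scheduler picks a task, every task
# still waiting has its priority boosted, so a low-priority task cannot
# starve indefinitely.

def pick_with_aging(ready_queue, boost=1):
    """ready_queue: list of [name, priority] (smaller = higher priority).
    Picks the highest-priority task and ages the tasks left waiting."""
    ready_queue.sort(key=lambda task: task[1])
    chosen = ready_queue.pop(0)
    for task in ready_queue:                # everyone still waiting ages
        task[1] = max(0, task[1] - boost)
    return chosen[0]

queue = [["low", 7], ["high", 2]]
print(pick_with_aging(queue))   # 'high' runs now
print(queue)                    # 'low' has aged from 7 toward 6
```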


Preemptive Scheduling
• In preemptive scheduling, every task in the ‘Ready’ state gets a chance to execute. • When and how often each process gets a chance to execute (gets the CPU time) depends on the type of preemptive scheduling algorithm used. • Here the scheduler can pre-empt (stop temporarily) the currently executing task and select another task from the ‘Ready’ queue for execution. • When to pre-empt a task and which task is to be picked up from the ‘Ready’ queue after preempting the current task is purely dependent on the scheduling algorithm. • A task which is preempted by the scheduler is moved to the ‘Ready’ queue. • The act of moving a ‘Running’ process into the ‘Ready’ queue by the scheduler, without the process requesting it, is known as ‘Preemption’. • Two important approaches used in preemptive scheduling are time-based and priority-based preemption.

Preemptive SJF scheduling / Shortest Remaining Time (SRT)
• The non-preemptive SJF scheduling algorithm sorts the ‘Ready’ queue only after the current process completes its execution or enters the ‘Wait’ state, whereas the preemptive SJF scheduling algorithm sorts the ‘Ready’ queue whenever a new process enters it, and checks whether the execution time of the new process is shorter than the remaining execution time of the currently executing process.

• If the execution time of the new process is less, the currently executing process is preempted and the new process is scheduled for execution. • Thus preemptive SJF scheduling always compares the estimated completion time of a new process entering the ‘Ready’ queue with the remaining completion time of the currently executing process, and schedules the process with the shortest remaining time for execution. • Preemptive SJF is also known as Shortest Remaining Time (SRT) scheduling.
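A sketch of SRT under the stated rule: at every arrival, the running process is compared against the newcomer's burst, and the shorter remaining time wins. The arrival times, bursts, and process names are hypothetical:

```python
# Sketch of Shortest Remaining Time (preemptive SJF): at each arrival
# or completion, the job with the least remaining time gets the CPU.

import heapq

def srt_completion_order(arrivals):
    """arrivals: list of (arrival_time, burst, name).
    Returns process names in order of completion."""
    events = sorted(arrivals)               # arrivals in time order
    ready, done, now, i = [], [], 0, 0
    while i < len(events) or ready:
        if not ready:                       # CPU idle: jump to arrival
            now = max(now, events[i][0])
        while i < len(events) and events[i][0] <= now:
            _, burst, name = events[i]
            heapq.heappush(ready, (burst, name))
            i += 1
        remaining, name = heapq.heappop(ready)
        # run until the next arrival, or until this job finishes
        slice_end = events[i][0] if i < len(events) else now + remaining
        run = min(remaining, slice_end - now)
        now += run
        if run < remaining:                 # preempted by a new arrival
            heapq.heappush(ready, (remaining - run, name))
        else:
            done.append(name)
    return done

# A long job started first is preempted by a shorter late arrival:
print(srt_completion_order([(0, 8, "long"), (1, 2, "short")]))
```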

Round Robin Scheduling
• The term Round Robin is very popular in sports and games. • In a round robin league each team in the group gets an equal chance to play against the rest of the teams in the same group, whereas in a ‘Knock out’ league the losing team in a match moves out of the tournament. • In round robin scheduling, each process in the ‘Ready’ queue is executed for a pre-defined time slot. • The execution starts by picking up the first process in the ‘Ready’ queue. • It is executed for the pre-defined time, and when that time elapses or the process completes (before the pre-defined time slice), the next process in the ‘Ready’ queue is selected for execution. • This is repeated for all the processes in the ‘Ready’ queue.

RR drawbacks
• Once each process in the ‘Ready’ queue has been executed for the pre-defined time period, the scheduler comes back and picks the first process in the ‘Ready’ queue again for execution. • The sequence is repeated. • The ‘Ready’ queue can be considered as a circular queue in which the scheduler picks up the first process for execution, moves to the next till the end of the queue, and then comes back to the beginning of the queue to pick up the first process. • Round Robin scheduling is similar to FCFS scheduling; the only difference is that a time-slice based preemption is added to switch the execution between the processes in the ‘Ready’ queue.

More on Round Robin Scheduling
• Round robin scheduling ensures that each process gets a fixed amount of CPU time for execution. • If a process terminates before the elapse of the time slice, it releases the CPU voluntarily and the next process in the queue is scheduled for execution by the scheduler. • The time slice of a kernel varies in the order of a few microseconds to milliseconds. • Certain OS kernels allow the time slice to be user configurable.
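The circular-queue behaviour described above can be sketched directly. The burst times and the time slice are illustrative values:

```python
# Sketch of round robin: each process runs for at most one time slice,
# then goes to the back of the circular 'Ready' queue; a process that
# finishes within its slice releases the CPU voluntarily.

from collections import deque

def round_robin(bursts, time_slice):
    """bursts: dict of name -> total run time. Returns completion order."""
    ready = deque(bursts.items())           # the circular 'Ready' queue
    finished = []
    while ready:
        name, remaining = ready.popleft()
        if remaining <= time_slice:         # completes within its slice
            finished.append(name)
        else:                               # preempted: back of the queue
            ready.append((name, remaining - time_slice))
    return finished

print(round_robin({"P1": 5, "P2": 2, "P3": 4}, time_slice=2))
```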

Priority based scheduling
• The priority based preemptive scheduling algorithm is the same as the non-preemptive priority based scheduling except for the switching of execution between tasks. • In preemptive scheduling, any high priority process entering the ‘Ready’ queue is immediately scheduled for execution, whereas in non-preemptive scheduling a high priority process entering the ‘Ready’ queue is scheduled only after the currently executing process completes its execution or voluntarily relinquishes the CPU.

Priority based scheduling
• Priority based preemptive scheduling gives real-time attention to high priority tasks. • Thus priority based preemptive scheduling is adopted in systems which demand ‘Real Time’ behavior. • Most RTOSs make use of the preemptive priority based scheduling algorithm for process scheduling. • Preemptive priority based scheduling also suffers from ‘Starvation’; this too can be eliminated by the ‘Aging’ technique.

Task Synchronization
• So far we have talked about many tasks executing in the RTOS. • In all but trivial systems, these tasks need to interact with each other, i.e. they must synchronize and communicate with each other. • When the tasks are independent, i.e. there is no communication between the tasks, they do not share any resources between them. • This can be compared to two roads that run parallel to each other and hence do not meet. • Let us explain this with a real-life example.

Task synchronization
• Vehicles can ply on these roads safely without colliding with the ones on the parallel road. • But the case becomes difficult when we have two intersecting roads. • This situation is different, because there is a shared region between the roads. • So the traffic on the two roads needs explicit synchronization. • Here we need an explicit mechanism like a traffic signal to make sure that the vehicles ply without getting into any mishaps. • The traffic signal is needed only at the region of intersection; there is no need for synchronization either before or after this region.

Task synchronization methods
• There are two ways of achieving synchronization between tasks: • Task synchronization using mutexes. • Task synchronization using semaphores.

Task synchronization using mutexes
• Problems occur only when a resource is shared among tasks, and synchronization needs to be done only during resource acquisition and release. • For example, consider two tasks that want to share a printer. • Let task A want to print the numbers 1 2 3, and let task B want to print the alphabets A B C. • If these tasks are scheduled in a round robin (time slicing) manner, then the printout may be: 1 2 A B 3 C (or any other interleaving).

Mutex
• The solution to this problem is that one of the tasks can acquire the printer resource, use it and then release it. • To implement this solution, we need to use a mutex – a short name for Mutual Exclusion. • It is a mechanism to exclude other tasks from using a resource when a specific task has acquired it. • For example, task A can be coded as:

// Task A code
// …
mutex_acquire (printer_mutex);
print (1);
print (2);
print (3);
mutex_release (printer_mutex);

More on mutex
• Similarly, task B can be coded as:

// Task B code
// …
mutex_acquire (printer_mutex);
print (‘A’);
print (‘B’);
print (‘C’);
mutex_release (printer_mutex);

• At any point of time, if both the tasks want to use the printer, they first try to acquire the mutex. • Since we are considering only a single processor model, the task which makes the first attempt will acquire it.

Example for mutex
• Let us consider a case where task A has acquired the printer_mutex.

// Task A code
// …
mutex_acquire (printer_mutex);
print (1);
// <-- Pre-empted here
print (2);
print (3);

• Let us now consider that task B has a higher priority and it gets scheduled after print (1). • Now let task B also want to print something. • It will try to acquire the printer_mutex. • But it cannot, since task A has already acquired the mutex.

Mutex

// Task B code
// …
mutex_acquire (printer_mutex);   // <-- Blocked here
print (‘A’);
print (‘B’);

• Task B will be blocked now, i.e. we say B is blocked on the resource. • Since task B is blocked, task A gets to resume and completes its printing. • It then releases the mutex. • Now task B can resume and continue with its printing. • We should remember that if task B is a higher priority task, the execution would shift to task B immediately after task A releases the mutex.

More on mutex
• Consider the following code of task A:

// …
print (3);
mutex_release (printer_mutex);
my_foo( );   // some other function called from task A

• In a truly pre-emptive system, the execution will be transferred to task ‘B’ immediately after the execution of mutex_release. • The statement my_foo( ) will be executed only after task A is scheduled again.

Pseudo code for mutex

# include <my_os.h>

// The printer mutex is a global variable so that both the tasks can access it
mutex printer_mutex;

int main( )
{
    // …
    task_create (TaskA);
    task_create (TaskB);
    // …
}

void TaskA (Params)
{
}

void TaskB (Params)
{
}
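As a runnable analogue of the printer example, the sketch below uses Python threads, with threading.Lock standing in for printer_mutex and a shared list standing in for the printer, so the interleaving can be inspected. The task functions and the output list are illustrative, not RTOS APIs:

```python
# The lock plays the role of printer_mutex: whichever task acquires it
# first prints its whole sequence as an unbroken group.

import threading

printer_mutex = threading.Lock()
output = []                      # stands in for the shared printer

def task_a():
    with printer_mutex:          # mutex_acquire (printer_mutex)
        for item in (1, 2, 3):
            output.append(item)  # print (1) ... print (3)
    # mutex released on leaving the with-block

def task_b():
    with printer_mutex:
        for item in ("A", "B", "C"):
            output.append(item)

threads = [threading.Thread(target=task_a), threading.Thread(target=task_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(output)   # one task's output, then the other's: never interleaved
```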

Race conditions
• Mutexes are also required when two tasks share data using global variables. • Let us consider a case where two tasks are writing into contiguous memory locations and another task uses the values produced by these tasks. • In concurrent programming terminology, the first two tasks that generate the values are called ‘producers’ and the one that uses these values is called the ‘consumer’. • A task can be a producer and a consumer at the same time.

Race conditions
• Let us use a pointer ptr to write into the list. • ptr is a global variable and both the tasks P1 and P2 can access it. • To write, we can use *ptr = 8;. • Assume that ptr points to memory location 0x4000. • Consider the following situation: P1 reads the value of ptr. • After P1 reads the value of ptr from memory, it gets pre-empted by P2, which writes, say, *ptr = 12. • Now the contents of memory location 0x4000 will be changed to 12.

Race conditions
• Before P2 increments the pointer, task P1 is scheduled again. • Now the value written by P2 is lost when P1 writes 8 into the same memory location. • This condition, where data consistency is lost because of the lack of synchronization between the component tasks of a system, is called a ‘race condition’. • This belongs to one of the worst categories of bugs – non-repeatable bugs. • To avoid this problem, shared global variables must be used only with synchronization.
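The rule above — shared globals only with synchronization — can be sketched with two producer threads updating one shared counter. The lock makes each read-modify-write atomic with respect to the other thread; without it, updates like the *ptr example can be lost:

```python
# Two 'producer' threads increment a shared global; the mutex protects
# the read-modify-write so no update is lost.

import threading

counter = 0
counter_lock = threading.Lock()

def producer(increments):
    global counter
    for _ in range(increments):
        with counter_lock:       # without this, increments can be lost
            counter += 1         # read, modify, write as one unit

p1 = threading.Thread(target=producer, args=(100_000,))
p2 = threading.Thread(target=producer, args=(100_000,))
p1.start(); p2.start()
p1.join(); p2.join()
print(counter)   # always 200000 while the lock is held for each update
```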

Priority inversion
• Priority inversion is one of the issues that must be addressed during the analysis and design of real-time systems. • In a pre-emptive system, at any point of time, only the task with the highest priority executes. • But if, due to some reason, a higher priority task is blocked because of some lower priority task, then a ‘Priority Inversion’ is said to have occurred. • It can happen in two ways: bounded priority inversion and unbounded priority inversion.

Bounded priority inversion
• Let us consider a system with two tasks A (TA) and B (TB). • Let the priority of TA be higher than that of TB. • Let TA initially be executing; after sometime, TA gets blocked and TB is scheduled. • Now, let TB acquire a mutex corresponding to a resource shared between TA and TB. • After sometime, before TB gets to finish its critical section code, TA gets scheduled (since TA’s priority is higher) and begins execution. • After sometime, TA tries to acquire the mutex for the resource shared between TA and TB. • But it cannot acquire the mutex, because it has already been acquired by TB. • Because of this, TA is blocked. • TB runs till it completes its critical section code and releases the mutex. • Once the mutex is released, TA can acquire it and continue.

Bounded priority inversion
• Here we see that TA, despite its higher priority, gets blocked for a period because the lower priority task TB holds the shared resource. • So in this case priority inversion is said to have occurred. • How long is TA blocked? • The answer is: in the worst case, TA will be blocked for a period equal to the critical section of TB (i.e. if TB is preempted immediately after acquiring the mutex).

More on bounded priority inversion

// Task B code
mutex_acquire (my_mutex);
// <-- TB is pre-empted here
// critical section code
mutex_release (my_mutex);

• The worst case is that the priority inversion lasts for a period equal to TB’s complete critical section. • Here the period for which the priority inversion occurs is ‘bounded’. • So this is called ‘bounded priority inversion’.

Unbounded priority inversion
• This is more dangerous than bounded priority inversion. • It is the case where the time for which priority inversion occurs is unbounded, i.e. we cannot fix how long the priority inversion will last. • Here the higher priority task will not be able to provide its services for an unknown period of time. • This could cause failure of the entire system. • Let us consider a system with 3 tasks Ta, Tb, Tc in decreasing order of priority (Ta has the highest priority). • Initially assume that the highest priority task Ta is running and gets blocked. • Assume Tb is also blocked due to some reason. • Now Tc starts running. • The task Tc acquires the mutex for the resource shared between Ta and Tc and enters the critical region.

Unbounded priority inversion
• Now Tc gets preempted by Tb, which gets pre-empted again by task Ta. • After sometime, Ta tries to acquire the mutex for the shared resource. • But Tc had already taken the mutex. • Once Ta gets blocked, Tb starts running. • Tc is still preempted and cannot release the mutex Ta requires. • Unlike the previous case, we cannot say how long it will be before the lower priority task releases the resource needed by the higher priority task. • We will have to wait for the intermediate priority task(s) to complete before the lower priority task can release the resource. • So this is called ‘unbounded priority inversion’.

Preventing priority inversion
• There are 2 schemes to avoid priority inversion. • They are: 1) Priority Inheritance Protocol (PIP) 2) Priority Ceiling Protocol (PCP).

Priority Inheritance Protocol (PIP)
• Here the priority of a task using a shared resource is made equal to the priority of the highest priority task that is blocked on the resource at the current instant. • This is done so that the priority of the lower priority task is boosted in such a way that the priority inversion becomes bounded. • This method needs the support of the OS.

Example
• Consider an RTOS where 1 is the highest priority. • Let 10 tasks, with priorities 1–10, execute in the system. • Let us consider that the tasks with priority 2 and 7 share a resource R. • Let T7 (the lower priority task) acquire the shared resource R and get pre-empted by task T5 before releasing it. • Later, let T2 get scheduled and get blocked on the resource R. • Immediately the priority of T7 is boosted to 2. • Now T7, with its boosted priority, will be able to complete its critical section and release the mutex. • Once the mutex is released, its previous priority is restored so that the actual high priority task can continue its execution and enter its critical section.
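The inheritance step in the T2/T7 example can be sketched as below. The Task class and the two helper functions are hypothetical illustrations, not a real RTOS API; smaller numbers mean higher priority:

```python
# Sketch of Priority Inheritance: when a high-priority task blocks on a
# mutex, the holder temporarily inherits the blocked task's priority,
# and gets its own back when it releases the mutex.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority     # effective (possibly boosted)

def block_on(blocked_task, holder):
    """Called when blocked_task needs a mutex that holder owns."""
    if blocked_task.priority < holder.priority:   # smaller = higher
        holder.priority = blocked_task.priority   # inherit it

def release_mutex(holder):
    holder.priority = holder.base_priority        # restore on release

t2, t7 = Task("T2", 2), Task("T7", 7)
block_on(t2, t7)
print(t7.priority)    # boosted to 2: T7 can now finish its critical section
release_mutex(t7)
print(t7.priority)    # restored to 7
```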

Priority Ceiling Protocol
• Here we assign a priority to each shared resource at design time. • The priority associated with the resource is the priority of the highest priority task that uses the resource. • During runtime, any task which wants to use the resource will acquire the priority associated with the resource.

Example
• Let us assume that the system has three tasks and two resources that are shared. • Let tasks T1, T2, T3 have priorities 1, 2 and 3. • Let us assume that 1 is the highest priority in the system. • Then we can form a table mapping resources to tasks:

Resource   Sharing Tasks   Priority
R1         T1, T2, T3      1
R2         T2, T3          2

Priority ceiling protocol
• Any task that wants to use R1 has to do the following: say T3 wants to use R1; it sets its priority to 1 and then accesses the resource. • After using the resource, the task restores its own priority. • We use the priority changing mechanism provided by the RTOS. • Now, task T1 cannot pre-empt T3, because T3’s priority has been raised to 1. • No mutex / semaphore is required. • It does not need any support from the OS.
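The ceiling rule from the table above can be sketched as follows. The CEILING mapping mirrors the R1/R2 table; the Task class and use_resource helper are illustrative stand-ins for the RTOS priority-changing mechanism:

```python
# Sketch of the Priority Ceiling Protocol: a task raises itself to the
# resource's ceiling priority for the duration of the critical section,
# then restores its own priority.

CEILING = {"R1": 1, "R2": 2}   # from the resource/priority table above

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

def use_resource(task, resource, work):
    saved = task.priority
    task.priority = CEILING[resource]   # raise to the ceiling
    try:
        return work()                   # critical section: tasks sharing
                                        # the resource cannot pre-empt us
    finally:
        task.priority = saved           # restore own priority

t3 = Task("T3", 3)
observed = use_resource(t3, "R1", lambda: t3.priority)
print(observed)       # 1 while inside the critical section
print(t3.priority)    # 3 restored afterwards
```

The try/finally mirrors the "manual set/reset" hazard discussed next: forgetting the restore step is exactly what makes hand-rolled PCP error prone.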

Disadvantages of Priority ceiling protocol
• Manual association: this method is manual, i.e. the priority is associated with the resource manually. So for large systems, maintaining the priorities associated with resources can be error prone. • Manual set/reset of priorities: in its original form (without mutexes), if tasks do not reset their priority after using the resource, it could cause havoc. • Time slicing not allowed: while using PCP, we have to adopt only a strict pre-emptive scheduling policy. PCP will fail if we mix pre-emption and time-slicing.
