
UNIT-5

Embedded/Real-Time OS Concepts
 Architecture of Kernel
 Task and Task Scheduler
 Context Switching
 Scheduling Algorithms
 EDF and Rate Monotonic
 Interrupt Service Routine
 Memory Management
 Priority Inversion Problem
 Priority inheritance
 Embedded OS
 Handheld OS

Kernel in Operating System


The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, in particular memory and CPU time. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.
The kernel is loaded into memory first when the operating system boots, and it remains in memory until the operating system is shut down. It is responsible for tasks such as disk management, task management, and memory management.
It decides which process should be allocated to the processor for execution and which processes should be kept in main memory awaiting execution. In short, it acts as an interface between user applications and hardware: the major aim of the kernel is to manage communication between software (user-level applications) and hardware (CPU, disk, memory).
Objectives of Kernel:
 To establish communication between user-level applications and hardware.
 To decide the state of incoming processes.
 To control disk management.
 To control memory management.
 To control task management.
The main tasks of the kernel are:

 Process management
 Device management
 Memory management
 Interrupt handling
 I/O communication
 File system...etc.
Types of Kernel:
1. Monolithic Kernel –
It is a type of kernel in which all operating system services operate in kernel space. There are dependencies between system components, and the code base is huge (millions of lines), which makes it complex.

Example:
 Unix, Linux, OpenVMS, XTS-400 etc.
Advantage:
It has good performance.
Disadvantage:
It has dependencies between system components and millions of lines of code.
2. Micro Kernel –
It is a type of kernel that takes a minimalist approach: only core services such as virtual memory and thread scheduling run in kernel space, while the rest run in user space. It is more stable because fewer services run in kernel space.
Example:
 Mach, L4, AmigaOS, Minix, K42 etc.
Advantage:
It is more stable.
Disadvantage:
There are lots of system calls and context switches.
3. Hybrid Kernel –
It is a combination of the monolithic kernel and the microkernel: it has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.
Example:
 Windows NT, NetWare, BeOS etc.
Advantage:
It combines the benefits of the monolithic kernel and the microkernel.
Disadvantage:
It is still similar to a monolithic kernel.
4. Exo Kernel –
It is a type of kernel that follows the end-to-end principle: it provides as few hardware abstractions as possible and allocates physical resources directly to applications.
Example:
 Nemesis, ExOS etc.
Advantage:
It has the fewest hardware abstractions.
Disadvantage:
There is more work for application developers.
5. Nano Kernel –
It is a type of kernel that offers hardware abstraction but no system services. Since a microkernel also provides few or no system services in kernel space, the terms nano kernel and microkernel have become nearly analogous.
Example:
 EROS etc.
Advantage:
It offers hardware abstraction without system services.
Disadvantage:
It is quite similar to the microkernel and is hence less used.
2. Task and Task Scheduler

What is a task, and what are the various states a task can be in, in an embedded environment?
2.1 Tasks
 A task is a piece of code or program that is separate from other tasks and can be executed independently of them.
 In embedded systems, the operating system has to deal with a limited number of tasks, depending on the functionality to be implemented in the embedded system.
 Multiple tasks are not executed at the same time; instead they are executed in pseudo-parallel, i.e. the tasks execute in turns as they use the processor.
 From a multitasking point of view, executing multiple tasks is like a single book being read by multiple people: at any time only one person can read it, and the readers take turns. Different bookmarks may be used to help each reader identify where to resume reading next time.
 An Operating System decides which task to execute in case there are multiple
tasks to be executed. The operating system maintains information about every
task and information about the state of each task.
 The information about a task is recorded in a data structure called the task
context. When a task is executing, it uses the processor and the registers
available for all sorts of processing. When a task leaves the processor for
another task to execute before it has finished its own, it should resume at a later
time from where it stopped and not from the first instruction. This requires the
information about the task with respect to the registers of the processor to be
stored somewhere. This information is recorded in the task context.
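To make the idea concrete, a task context is usually stored in a per-task structure, often called a task control block (TCB). Below is a minimal sketch in C; the field names and the 16-register layout are illustrative assumptions, not taken from any particular RTOS.

/* A hypothetical task control block: everything needed to resume a task. */
typedef struct {
    unsigned int registers[16]; /* saved general-purpose registers */
    unsigned int sp;            /* saved stack pointer */
    unsigned int pc;            /* saved program counter (resume address) */
    int          priority;      /* scheduling priority */
    int          state;         /* Ready, Running or Waiting (see 2.2) */
} TaskContext;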
2.2 Task States
In an operating system there are always multiple tasks, but at any time only one task can be executed. This means that there are other tasks waiting their turn to be executed.
 Depending on whether or not it is executing, a task may be classified into one of the following three states:
 Running state – Only one task can actually be using the processor at a given time; that task is said to be the “running” task and its state is the “running state”. No other task can be in that same state at the same time.
 Ready state – Tasks that are not currently using the processor but are ready to run are in the “ready” state. There may be a queue of tasks in the ready state.
 Waiting state – Tasks that are in neither the running nor the ready state, but are waiting for some event external to themselves to occur before they can run, are in the “waiting” state.
 
A transition between the ready and running states occurs whenever the operating system selects a new task to run.
 The task that was previously in the running state becomes ready, and the new task is promoted to the running state.
 A task will leave the running state only if it needs to wait for some event external to itself to occur before continuing.
 A task's state can be defined as follows:
enum TaskState { Ready, Running, Waiting };
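Building on the enum above and the hypothetical TaskContext sketch from section 2.1, the ready/running transition described here could look like the following; dispatch is an illustrative name, not a standard API.

/* Demote the outgoing task to Ready, promote the chosen task to Running.
 * The register save/restore itself is done by the context switch code. */
void dispatch(TaskContext *running, TaskContext *next)
{
    if (running != 0 && running->state == Running)
        running->state = Ready;  /* outgoing task waits for its next turn */
    next->state = Running;       /* incoming task gets the processor */
}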
3 SCHEDULER
The heart and soul of any operating system is its scheduler.
 This is the piece of the operating system that decides which of the ready tasks has the right to use the processor at a given time.
 It simply checks whether the running task is the highest-priority ready task.
 Some of the more common scheduling algorithms are:

Rate-monotonic scheduling
Rate monotonic scheduling is a priority algorithm that belongs to the static-priority scheduling category of real-time operating systems. It is preemptive in nature. Priority is decided according to the cycle time (period) of the processes involved: the process with the shortest period has the highest priority. Thus, when the highest-priority process becomes ready, it preempts any other running process. The priority of a process is inversely proportional to its period.
A set of processes can be scheduled only if they satisfy the following equation:

U = Σ (Ci / Ti) ≤ n(2^(1/n) − 1),  summing over i = 1 … n

where n is the number of processes in the process set, Ci is the computation time of process i, Ti is the time period of process i, and U is the processor utilization.
Example:
An example to understand the working of the rate monotonic scheduling algorithm.

Process   Execution Time (C)   Time Period (T)
P1        3                    20
P2        2                    5
P3        2                    10

n(2^(1/n) − 1) = 3(2^(1/3) − 1) = 0.7798

U = 3/20 + 2/5 + 2/10 = 0.75

The combined utilization of the three processes, 0.75, is below the bound of 0.7798 (and well under 1, i.e. 100% utilization), which means the above set of processes is schedulable: it satisfies the equation of the algorithm.
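As a sketch, the utilization test above translates directly into code. The following C program checks the Liu–Layland bound for the example task set; it is an illustration, not part of any RTOS.

#include <stdio.h>
#include <math.h>

/* Sufficient (not necessary) RM schedulability test:
 * the sum of c[i]/t[i] must not exceed n(2^(1/n) - 1). */
int rm_schedulable(const double *c, const double *t, int n)
{
    double u = 0.0, bound = n * (pow(2.0, 1.0 / n) - 1.0);
    for (int i = 0; i < n; i++)
        u += c[i] / t[i];
    printf("U = %.4f, bound = %.4f\n", u, bound);
    return u <= bound;
}

int main(void)
{
    double c[] = { 3, 2, 2 };    /* execution times of P1, P2, P3 */
    double t[] = { 20, 5, 10 };  /* periods of P1, P2, P3 */
    printf("schedulable: %s\n", rm_schedulable(c, t, 3) ? "yes" : "no");
    return 0;
}

Run against the example, this prints U = 0.7500 against a bound of 0.7798, confirming that the task set is schedulable.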
1. Scheduling time –
To calculate the scheduling time of the algorithm, take the LCM of the time periods of all the processes. LCM(20, 5, 10) in the above example is 20, so we can schedule over 20 time units.
2. Priority –
As discussed above, priority is highest for the process with the shortest time period. Thus P2 has the highest priority, then P3, and lastly P1:
P2 > P3 > P1
3. Representation and flow –
Process P2 executes for 2 time units in every 5-time-unit period, process P3 executes for 2 time units in every 10-time-unit period, and process P1 executes for 3 time units within the 20 time units. Keep this in mind to follow the execution of the algorithm below.

Process P2 runs first, for 2 time units, because it has the highest priority. After it completes its 2 units, P3 gets its chance and runs for 2 time units.
Since P2 runs for 2 units in each 5-unit interval and P3 runs for 2 units in each 10-unit interval, both have fulfilled their criteria, so process P1, which has the lowest priority, now gets its chance and runs for 1 time unit. At this point the first interval of five time units is complete.
Because of its priority, P2 preempts P1 at time 5 and runs for 2 units. As P3 has already completed its 2 time units for its 10-unit interval, P1 then gets the chance and runs for its remaining 2 units, completing its 3 units of execution within the 20 time units.
The 9–10 interval remains idle, as no process needs it. At time 10, process P2 runs for 2 units, meeting its criterion for the third interval (10–15). Process P3 then runs for 2 units, completing its execution. The 14–15 interval again remains idle for the same reason. At time 15, process P2 executes for 2 units, completing its execution. This is how rate monotonic scheduling works.
Conditions:
The analysis of rate monotonic scheduling assumes a few properties that every process should possess:
1. Processes should not share resources with other processes.
2. Deadlines must be equal to the time periods. Deadlines are deterministic.
3. The highest-priority process that needs to run preempts all other processes.
4. Priorities must be assigned to all processes according to the protocol of rate monotonic scheduling.
Advantages:
1. It is easy to implement.
2. If any static priority assignment algorithm can meet the deadlines, then rate monotonic scheduling can also do so: it is optimal among static-priority algorithms.
3. It takes the time periods of the processes into account, unlike time-sharing algorithms such as round robin, which neglect the scheduling needs of the processes.
Disadvantages:
1. It is very difficult to support aperiodic and sporadic tasks under RMA.
2. RMA is not optimal when task periods and deadlines differ.
Earliest Deadline First (EDF) CPU scheduling algorithm
Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.
EDF uses priorities for scheduling jobs. It assigns priorities to tasks according to their absolute deadlines: the task whose deadline is closest gets the highest priority. The priorities are assigned and changed dynamically. EDF is very efficient compared to other scheduling algorithms for real-time systems: it can drive CPU utilization to about 100% while still guaranteeing the deadlines of all the tasks.
In EDF, if the CPU utilization is not more than 100%, all the tasks meet their deadlines. EDF finds an optimal feasible schedule; a feasible schedule is one in which all the tasks in the system execute within their deadlines. If EDF cannot find a feasible schedule for all the tasks in a real-time system, then no other task-scheduling algorithm can give a feasible schedule either. All tasks that are ready for execution should announce their deadlines to EDF as they become runnable.
The EDF scheduling algorithm does not require tasks to be periodic, nor does it require them to have a fixed CPU burst time. In EDF, an executing task can be preempted whenever another periodic instance with an earlier deadline becomes ready. Preemption is allowed in the Earliest Deadline First scheduling algorithm.
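The selection rule can be sketched in a few lines of C: pick the ready task whose absolute deadline is nearest. The Task structure and field names below are hypothetical, used only for this illustration.

/* Hypothetical task record for EDF selection. */
typedef struct {
    int ready;               /* nonzero if the task is ready to run */
    unsigned long deadline;  /* absolute deadline, in timer ticks */
} Task;

/* Return the index of the ready task with the earliest deadline,
 * or -1 if no task is ready (the idle case). */
int edf_pick(const Task *tasks, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (tasks[i].ready &&
            (best < 0 || tasks[i].deadline < tasks[best].deadline))
            best = i;
    return best;
}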
Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50
Let the processing time of P1 be t1 = 25
Let the period of P2 be p2 = 75
Let the processing time of P2 be t2 = 30
Steps for solution:
1. The deadline of P1 is earlier, so the priority of P1 > P2.
2. Initially P1 runs and completes its execution of 25 time units.
3. After 25 time units, P2 starts to execute, until time 50, when P1 becomes ready again.
4. Now, comparing the deadlines (P1, P2) = (100, 75), P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 starts to execute, until time 75, when P2 becomes ready again.
7. Now, again comparing the deadlines (P1, P2) = (100, 150), P1 continues to execute.
8. Repeat the above steps…
9. Finally, at time 150, both P1 and P2 have the same deadline, so P2 will continue to execute until its processing time is done, after which P1 starts to execute.
Limitations of EDF scheduling algorithm:
 Transient Overload Problem
 Resource Sharing Problem
 Efficient Implementation Problem

 First-in-first-out
First-in-first-out (FIFO) scheduling describes an operating system that is not a multitasking operating system.
 Each task runs until it is finished, and only after that is the next task started, on a first-come first-served basis.
 Shortest job first
Shortest job first scheduling uses algorithms that always select the task that will require the least amount of processor time to complete.
 Round robin
Round robin scheduling uses algorithms that allow every task to execute for a fixed amount of time.
 A running task is interrupted and put into the waiting state when its execution time expires.
 3.1 Scheduling Points
The scheduling points are the set of operating system events that result in an invocation of the scheduler.
 There are three such events: task creation, task deletion, and the clock tick. During a task creation or task deletion event, a method is called to select the next task to be run.
 The third scheduling point, the clock tick, is a periodic event triggered by a timer interrupt. When a timer expires, all of the tasks that are waiting for it to complete are changed from the waiting state to the ready state.
 3.2 Ready List
The scheduler uses a data structure called the ready list to track the tasks that
are in the ready state.
 The ready list is implemented as an ordinary linked list, ordered by priority.
 So the head of this list is always the highest priority task that is ready to run.
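A minimal sketch of such a ready list in C follows; the node layout and names are illustrative assumptions, not from a specific RTOS.

/* Ready-list node: the list is kept sorted so the head is always
 * the highest-priority ready task. */
typedef struct TaskNode {
    int priority;              /* larger value = higher priority here */
    struct TaskNode *next;
} TaskNode;

/* Insert a task in priority order; returns the (possibly new) head. */
TaskNode *ready_list_insert(TaskNode *head, TaskNode *task)
{
    if (head == 0 || task->priority > head->priority) {
        task->next = head;     /* new highest-priority task */
        return task;
    }
    TaskNode *p = head;
    while (p->next != 0 && p->next->priority >= task->priority)
        p = p->next;
    task->next = p->next;      /* splice in behind equal priorities */
    p->next = task;
    return head;
}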
 3.3 Idle task
 If there are no tasks in the ready state when the scheduler is called, the idle task
will be executed.
 The idle task looks the same in every operating system.
 The idle task is always considered to be in the ready state.
 4 CONTEXT SWITCH
 The actual process of changing from one task to another is called a context switch.
 Since contexts are processor-specific, so is the code that implements the context switch; hence it must always be written in assembly language.
 5 TASK SYNCHRONIZATION
All the tasks in the multitasking operating systems work together to solve a
larger problem and to synchronize their activities, they occasionally
communicate with one another.
 
For example, in the printer sharing device the printer task doesn’t have any
work to do until new data is supplied to it by one of the computer tasks.
 So the printer and the computer tasks must communicate with one another to
coordinate their access to common data buffers.
One way to do this is to use a data structure called a mutex.
 Mutexes are mechanisms provided by many operating systems to assist with
task synchronization.
 A mutex is a multitasking-aware binary flag; it is multitasking-aware because the operations of setting and clearing the flag are atomic (i.e. they cannot be interrupted).
When this binary flag is set, the shared data buffer is assumed to be in use by
one of the tasks. All other tasks must wait until that flag is cleared before
reading or writing any of the data within that buffer.
 The atomicity of the mutex set and clear operations is enforced by the operating
system, which disables interrupts before reading or modifying the state of the
binary flag.
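The pattern described above can be sketched as follows. The calls mutex_lock and mutex_unlock are stand-ins for whatever primitives a given RTOS actually provides, and the buffer size is arbitrary.

/* Hypothetical mutex API provided by the RTOS. */
extern void mutex_lock(int *m);    /* atomically set the flag; wait if set */
extern void mutex_unlock(int *m);  /* atomically clear the flag */

static int  buffer_mutex;          /* the multitasking-aware binary flag */
static char shared_buffer[64];     /* data shared between tasks */

void computer_task_send(const char *data, int len)
{
    mutex_lock(&buffer_mutex);     /* wait until the buffer is free */
    for (int i = 0; i < len && i < 64; i++)
        shared_buffer[i] = data[i];    /* critical section */
    mutex_unlock(&buffer_mutex);   /* printer task may now read the data */
}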

Tasks and scheduling


Tasks, Threads and Processes
We have already considered the multi-tasking concept – multiple quasi-
independent programs apparently running at the same time, under the control
of an operating system. Before we look at tasks in more detail, we need to
straighten out some more terminology.
We use the word “task” – and I will continue to do so – but it does not have a
very precise meaning. Two other terms – “thread” and “process” – are more
specific and we should investigate what they mean and how they are
differentiated.
Most RTOSes used in embedded applications employ a multi-thread model. A
number of threads may be running and they all share the same address space:
This means that a context swap is primarily a change from one set of CPU
register values to another. This is quite simple and fast. A potential hazard is the
ability of each thread to access memory belonging to the others or to the RTOS
itself.
The alternative is the multi-process model. If a number of processes are
running, each one has its own address space and cannot access the memory
associated with other processes or the RTOS:

This makes the context swap more complex and time consuming, as the OS
needs to set up the memory management unit (MMU) appropriately. Of course,
this architecture is only possible with a processor that supports an MMU.
Processes are supported by “high end” RTOSes and most desktop operating
systems. To further complicate matters, there may be support for multiple
threads within each process. This latter capability is rarely exploited in
conventional embedded applications.
A useful compromise may be reached, if an MMU is available, thus:
Many thread-based RTOSes support the use of an MMU to simply protect
memory from unauthorized access. So, while a task is in context, only its
code/data and necessary parts of the RTOS are “visible”; all the other memory is
disabled and an attempted access would cause an exception. This makes the
context switch just a little more complex, but renders the application more
secure. This may be called “Thread Protected Mode” or “Lightweight Process
Model”.
Schedulers
As we know, the illusion that all the tasks are running concurrently is achieved
by allowing each to have a share of the processor time. This is the core
functionality of a kernel. The way that time is allocated between tasks is termed
“scheduling”. The scheduler is the software that determines which task should
be run next. The logic of the scheduler and the mechanism that determines
when it should be run is the scheduling algorithm. We will look at a number of
scheduling algorithms in this section. Task scheduling is actually a vast subject,
with many whole books devoted to it. The intention here is to just give sufficient
introduction that you can understand what a given RTOS has to offer in this
respect.
Run to Completion (RTC) Scheduler
RTC scheduling is very simplistic and uses minimal resources. It is, therefore, an
ideal choice, if the application’s needs are fulfilled. Here is the timeline for a
system using RTC scheduling:
The scheduler simply calls the top level function of each task in turn. That task
has control of the CPU (interrupts aside) until the top level function executes
a return statement. If the RTOS supports task suspension, then any tasks that
are currently suspended are not run. This is a topic discussed below; see Task
Suspend . 
The big advantages of an RTC scheduler, aside from its simplicity, are the need
for just a single stack and the portability of the code (as no assembly language is
generally required). The downside is that a task can “hog” the CPU, so careful
program design is required. Although each task is started “from the top” each
time it is scheduled – unlike other kinds of schedulers which allow the code to
continue from where it left off – greater flexibility may be programmed by use of
static “state” variables, which determine the logic of each sequential call.
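A minimal sketch of an RTC scheduler in C follows; the task functions and the suspension flags are illustrative assumptions.

/* Run-to-completion scheduler: call each task's top-level function
 * in turn, on a single shared stack, skipping suspended tasks. */
#define NUM_TASKS 3

void task_a(void);
void task_b(void);
void task_c(void);

static void (*const task_table[NUM_TASKS])(void) = { task_a, task_b, task_c };
static int suspended[NUM_TASKS];   /* nonzero = task currently suspended */

void rtc_scheduler(void)
{
    for (;;) {
        for (int i = 0; i < NUM_TASKS; i++)
            if (!suspended[i])
                task_table[i]();   /* task runs until it returns */
    }
}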

Round Robin (RR) Scheduler


An RR scheduler is similar to RTC, but more flexible and, hence, more complex.
In the same way, each task is run in turn (allowing for task suspension), thus:

However, with the RR scheduler, the task does not need to execute a return in
the top level function. It can relinquish the CPU at any time by making a call to
the RTOS. This call results in the kernel saving the context (all the registers –
including stack pointer and program counter) and loading the context of the
next task to be run. With some RTOSes, the processor may be relinquished – and
the task suspended – pending the availability of a kernel resource. This is more
sophisticated, but the principle is the same.
The greater flexibility of the RR scheduler comes from the ability for the tasks to
continue from where they left off without any accommodation in the
application code. The price for this flexibility is more complex, less portable
code and the need for a separate stack for each task.
Time Slice (TS) Scheduler
A TS scheduler is the next step in complexity from RR. Time is divided into
“slots”, with each task being allowed to execute for the duration of its slot, thus:

In addition to being able to relinquish the CPU voluntarily, a task is preempted by a scheduler call made from a clock tick interrupt service routine. The idea of simply allocating each task a fixed time slice is very appealing – for applications where it fits the requirements – as it is easy to understand and very predictable.
The only downside of simple TS scheduling is that the proportion of CPU time allocated to each task varies, depending upon whether other tasks are suspended or relinquish part of their slots, thus:
A more predictable TS scheduler can be constructed if the concept of a
“background” task is introduced. The idea, shown here, is for the background
task to be run instead of any suspended tasks and to be allocated the remaining
slot time when a task relinquishes (or suspends itself).

Obviously the background task should not do any time-critical work, as the
amount of CPU time it is allocated is totally unpredictable – it may never be
scheduled at all.
This design means that each task can predict when it will be scheduled again.
For example, if you have 10ms slots and 10 tasks, a task knows that, if it
relinquishes, it will continue executing after 100ms. This can lead to elegant
timing loops in application tasks.
An RTOS may offer the possibility for different time slots for each task. This
offers greater flexibility, but is just as predictable as with fixed slot size. Another
possibility is to allocate more than one slot to the same task, if you want to
increase its proportion of allocated processor time.
Priority Scheduler
Most RTOSes support Priority scheduling. The idea is simple: each task is
allocated a priority and, at any particular time, whichever task has the highest
priority and is “ready” is allocated the CPU, thus:

The scheduler is run when any “event” occurs (e.g. an interrupt or certain kernel service calls) that may cause a higher-priority task to be made “ready”. There are broadly three circumstances that might result in the scheduler being run:

 The task suspends itself; clearly the scheduler is required to determine which task to run next.
 The task readies another task (by means of an API call) of higher priority.
 An interrupt service routine (ISR) readies another task of higher priority. This could be an input/output device ISR or it may be the result of the expiration of a timer (which are supported by many RTOSes – we will look at them in detail in a future article).

The number of levels of priority varies (from 8 to many hundreds) and the
significance of higher and lower values differs; some RTOSes use priority 0 as
highest, others as lowest.
Some RTOSes only allow a single task at each priority level; others permit
multiple tasks at each level, which complicates the associated data structures
considerably. Many OSes allow task priorities to be changed at runtime, which
adds further complexity.
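As a sketch, the core of a priority scheduler is a search for the highest-priority ready task. The C fragment below assumes one task per priority level, with 0 as the highest priority; both choices vary between RTOSes, as noted above.

/* Return the priority level of the highest-priority ready task,
 * or -1 if nothing is ready (the idle task should then run). */
int pick_highest_priority_ready(const int *ready, int num_levels)
{
    for (int prio = 0; prio < num_levels; prio++)
        if (ready[prio])       /* one task per level in this sketch */
            return prio;
    return -1;
}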
Composite Scheduler
We have looked at RTC, RR, TS and Priority schedulers, but many commercial
RTOS products offer more sophisticated schedulers, which have characteristics
of more than one of these algorithms. For example, an RTOS may support
multiple tasks at each priority level and then use time slicing to divide time
between multiple ready tasks at the highest level.
Task States
At any one moment in time, just one task is actually running. Aside from CPU
time spent running interrupt service routines (more on that in the next article)
or the scheduler, the “current” task is the one whose code is currently being
executed and whose data is characterized by the current register values. There
may be other tasks that are “ready” (to run) and these will be considered when
the scheduler is executed. In a simple RTOS, using a Run to Completion, Round
Robin or Time Slice scheduler, this may be the whole story. But, more
commonly, and always with a Priority scheduler, tasks may also be in a
“suspended” state, which means that they are not considered by the scheduler
until they are resumed and made “ready”.
Task Suspend
Task suspension may be quite simple – a task suspends itself (by making an API
call) or another task suspends it. Another API call needs to be made by another
task or ISR to resume the suspended task. This is an “unconditional” or “pure”
suspend. Some OSes refer to a task as being “asleep”.
An RTOS may offer the facility for a task to suspend itself (go to sleep) for a
specific period of time, at the end of which it is resumed (by the system clock
ISR, see below). This may be termed “sleep suspend”.
Another more complex suspend may be offered, if an RTOS supports “blocking”
API calls. Such a call permits the task to request a service or resource, which it
will receive immediately if it is available, otherwise it is suspended until it is
available. There may also be a timeout option whereby a task is resumed if the
resource is not available in a specific timeframe.
Other Task States
Many RTOSes support other task states, but the definition of these and the
terminology used varies. Possibilities include a “finished” state, which simply
means that the task’s outermost function has exited (either by executing
a return or just ending the outer function block). For a finished task to run
again, it would probably need to be reset in some way.
Another possibility is a “terminated” state. This is like a pure suspend, except
that the task must be reset to its initial state in order to run again.
If an RTOS supports dynamic creation and deletion of tasks (see the next
article), this implies another possible task state: “deleted”.

Embedded Systems - Interrupts


An interrupt is a signal to the processor emitted by hardware or software indicating
an event that needs immediate attention. Whenever an interrupt occurs, the
controller completes the execution of the current instruction and starts the execution
of an Interrupt Service Routine (ISR) or Interrupt Handler. ISR tells the processor
or controller what to do when the interrupt occurs. The interrupts can be either
hardware interrupts or software interrupts.

Hardware Interrupt
A hardware interrupt is an electronic alerting signal sent to the processor from an
external device, like a disk controller or an external peripheral. For example, when
we press a key on the keyboard or move the mouse, they trigger hardware interrupts
which cause the processor to read the keystroke or mouse position.

Software Interrupt
A software interrupt is caused either by an exceptional condition or by a special instruction in the instruction set which causes an interrupt when it is executed by the processor. For example, if the processor's arithmetic logic unit executes an instruction that divides a number by zero, a divide-by-zero exception is raised, causing the computer to abandon the calculation or display an error message. Software interrupt instructions work like subroutine calls.

What is Polling?
The state of continuous monitoring is known as polling. The microcontroller keeps
checking the status of other devices; and while doing so, it does no other operation
and consumes all its processing time for monitoring. This problem can be addressed
by using interrupts.
In the interrupt method, the controller responds only when an interruption occurs.
Thus, the controller is not required to regularly monitor the status (flags, signals etc.)
of interfaced and inbuilt devices.
Interrupts v/s Polling
Here is an analogy that differentiates an interrupt from polling −

Interrupt: An interrupt is like a shopkeeper. If one needs a service or product, one goes to him and apprises him of one's needs. In the case of interrupts, when flags or signals are received, they notify the controller that they need to be serviced.

Polling: The polling method is like a salesperson. The salesman goes from door to door requesting to buy a product or service. Similarly, the controller keeps monitoring the flags or signals one by one for all devices and provides service to whichever component needs it.
Interrupt Service Routine
For every interrupt, there must be an interrupt service routine (ISR), or interrupt
handler. When an interrupt occurs, the microcontroller runs the interrupt service
routine. For every interrupt, there is a fixed location in memory that holds the
address of its interrupt service routine, ISR. The table of memory locations set aside
to hold the addresses of ISRs is called as the Interrupt Vector Table.

Interrupt Vector Table


There are six interrupts, including RESET, in the 8051.

Interrupt                        ROM Location (Hex)   Pin
Reset                            0000                 9
External HW interrupt 0 (INT0)   0003                 P3.2 (12)
Timer 0 (TF0)                    000B                 –
External HW interrupt 1 (INT1)   0013                 P3.3 (13)
Timer 1 (TF1)                    001B                 –
Serial COM (RI and TI)           0023                 –

 When the reset pin is activated, the 8051 jumps to the address location 0000.
This is power-up reset.
 Two interrupts are set aside for the timers: one for timer 0 and one for timer 1.
Memory locations are 000BH and 001BH respectively in the interrupt vector
table.
 Two interrupts are set aside for hardware external interrupts. Pin no. 12 and
Pin no. 13 in Port 3 are for the external hardware interrupts INT0 and INT1,
respectively. Memory locations are 0003H and 0013H respectively in the
interrupt vector table.
 Serial communication has a single interrupt that belongs to both receive and
transmit. Memory location 0023H belongs to this interrupt.

Steps to Execute an Interrupt


When an interrupt becomes active, the microcontroller goes through the following steps −
 The microcontroller completes the currently executing instruction and saves the address of the next instruction (PC) on the stack.
 It also saves the current status of all the interrupts internally (i.e., not on the stack).
 It jumps to the memory location in the interrupt vector table that holds the address of the interrupt service routine.
 The microcontroller gets the address of the ISR from the interrupt vector table and jumps to it. It starts to execute the interrupt service subroutine until it reaches the last instruction of the subroutine, which is RETI (return from interrupt).
 Upon executing the RETI instruction, the microcontroller returns to the location where it was interrupted. First, it gets the program counter (PC) address from the stack by popping the top bytes of the stack into the PC. Then, it starts to execute from that address.
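As an illustration, here is how a Timer 0 ISR might be written in C for the 8051 using the SDCC compiler; vector number 1 corresponds to ROM location 000BH in the table above, and the compiler emits the RETI instruction itself. The tick counter is just a placeholder workload.

#include <8051.h>              /* SDCC's 8051 register definitions */

volatile unsigned int tick;    /* written by the ISR, read by main code */

/* Timer 0 interrupt handler, placed at vector 1 (address 000BH). */
void timer0_isr(void) __interrupt(1)
{
    tick++;                    /* placeholder: count timer overflows */
}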

Edge Triggering vs. Level Triggering


Interrupt modules are of two types − level-triggered or edge-triggered.

Level Triggered:
 A level-triggered interrupt module generates an interrupt whenever the level of the interrupt source is asserted.
 If the interrupt source is still asserted when the firmware interrupt handler handles the interrupt, the interrupt module regenerates the interrupt, causing the handler to be invoked again.
 Level-triggered interrupts are cumbersome for firmware.

Edge Triggered:
 An edge-triggered interrupt module generates an interrupt only when it detects an asserting edge of the interrupt source. The edge is detected when the interrupt source level actually changes; it can also be detected by periodic sampling, when an asserted level follows a previously de-asserted sample.
 Edge-triggered interrupt modules can be acted on immediately, no matter how the interrupt source behaves.
 Edge-triggered interrupts keep the firmware's code complexity low, reduce the number of conditions for firmware, and provide more flexibility when interrupts are handled.

Enabling and Disabling an Interrupt


Upon reset, all the interrupts are disabled, even if they are activated. The interrupts must be enabled using software in order for the microcontroller to respond to them.
The IE (interrupt enable) register is responsible for enabling and disabling interrupts. IE is a bit-addressable register.
Interrupt Enable Register
EA - ET2 ES ET1 EX1 ET0 EX0

 EA − Global enable/disable.
 - − Undefined.
 ET2 − Enable Timer 2 interrupt.
 ES − Enable Serial port interrupt.
 ET1 − Enable Timer 1 interrupt.
 EX1 − Enable External 1 interrupt.
 ET0 − Enable Timer 0 interrupt.
 EX0 − Enable External 0 interrupt.
To enable an interrupt, we take the following steps −
 Bit D7 of the IE register (EA) must be high to allow the rest of the register to take effect.
 If EA = 1, interrupts are enabled and will be responded to if their corresponding bits in IE are high. If EA = 0, no interrupt will be responded to, even if its associated bit in the IE register is high.
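For example, enabling the Timer 1 interrupt in C, using SDCC's bit names for the IE register (setting EA and ET1 is equivalent to writing IE = 0x88):

#include <8051.h>

void enable_timer1_interrupt(void)
{
    ET1 = 1;   /* IE.3: enable the Timer 1 interrupt */
    EA  = 1;   /* IE.7: global interrupt enable */
}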
Interrupt Priority in 8051
We can alter the interrupt priority by assigning the higher priority to any one of the
interrupts. This is accomplished by programming a register called IP (interrupt
priority).
The following figure shows the bits of IP register. Upon reset, the IP register contains
all 0's. To give a higher priority to any of the interrupts, we make the corresponding
bit in the IP register high.

- - - - PT1 PX1 PT0 PX0

- IP.7 Not Implemented.

- IP.6 Not Implemented.

- IP.5 Not Implemented.

- IP.4 Not Implemented.

PT1 IP.3 Defines the Timer 1 interrupt priority level.

PX1 IP.2 Defines the External Interrupt 1 priority level.

PT0 IP.1 Defines the Timer 0 interrupt priority level.

PX0 IP.0 Defines the External Interrupt 0 priority level.

Interrupt inside Interrupt

What happens if the 8051 is executing an ISR belonging to one interrupt and another interrupt becomes active? In such cases, a high-priority interrupt can interrupt a low-priority interrupt. This is known as interrupt inside interrupt. In the 8051, a low-priority interrupt can be interrupted by a high-priority interrupt, but not by another low-priority interrupt.

Triggering an Interrupt by Software


There are times when we need to test an ISR by way of simulation. This can be done with simple instructions that set the interrupt flag high, thereby causing the 8051 to jump to the interrupt vector table. For example, set the IE bit for Timer 1 to 1; the instruction SETB TF1 will then interrupt the 8051 in whatever it is doing and force it to jump to the interrupt vector table.
Memory Management
Discussed in the 3rd module; check the 3rd module notes.

Priority Inversion
Priority inversion is an operating system scenario in which a higher-priority process is forced to wait while a lower-priority process runs. This implies an inversion of the priorities of the two processes.

Problems due to Priority Inversion

Some of the problems that occur due to priority inversion are given as follows −

 A system malfunction may occur if a high-priority process is not provided the resources it requires.
 Priority inversion may also force corrective measures, which may include resetting the entire system.
 The performance of the system can be reduced by priority inversion, because it is imperative for higher-priority tasks to execute promptly.
 System responsiveness decreases, as high-priority tasks may have strict time constraints or real-time response guarantees.
 Sometimes there is no harm caused by priority inversion, as the late execution of the high-priority process goes unnoticed by the system.

Solutions of Priority Inversion


Some of the solutions to handle priority inversion are given as follows −

 Priority Ceiling
All of the resources are assigned a priority that is equal to the highest priority of any task that may attempt to claim them. This helps in avoiding priority inversion.
 Disabling Interrupts
There are only two priorities in this case, i.e. interrupts disabled and preemptible. So priority inversion is impossible, as there is no third option.
 Priority Inheritance
This solution temporarily raises the priority of the executing low-priority task to that of the highest-priority task that needs the resource. This means that medium-priority tasks cannot intervene and cause priority inversion. (A code sketch follows this list.)
 No blocking
Priority inversion can be avoided by avoiding blocking, since it is by blocking that the low-priority task holds up the high-priority task.
 Random boosting
The priority of the ready tasks can be randomly boosted until they exit the critical section.
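On operating systems that support POSIX threads, priority inheritance can be requested per mutex. A brief C illustration follows; error handling is omitted, and support for PTHREAD_PRIO_INHERIT is platform-dependent.

#include <pthread.h>

pthread_mutex_t resource_lock;

void init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* Apply the priority inheritance protocol to tasks
     * blocked on this mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}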
Priority Inheritance Protocol (PIP) in Synchronization
Priority Inheritance Protocol (PIP) is a technique used for sharing critical resources among different tasks. It allows critical resources to be shared among different tasks without the occurrence of unbounded priority inversion.
Basic Concept of PIP :
The basic concept of PIP is that when a task goes through priority inversion, the
priority of the lower priority task which has the critical resource is increased by the
priority inheritance mechanism. It allows this task to use the critical resource as early
as possible without going through the preemption. It avoids the unbounded priority
inversion.
Working of PIP:
 When several tasks are waiting for the same critical resource, the task currently holding the resource is given the highest priority among all the tasks waiting for it.
 Once the lower-priority task holding the critical resource has been given the highest priority, the intermediate-priority tasks cannot preempt it. This helps in avoiding unbounded priority inversion.
 When the task that was given the highest priority finishes its job and releases the critical resource, it returns to its original priority value.
 If a task holds multiple critical resources, then after releasing one critical resource it cannot yet return to its original priority value; in that case it inherits the highest priority among all tasks waiting for the resources it still holds.
if the critical resource is free:
    allocate the resource
else if the critical resource is held by a higher-priority task:
    wait for the resource
else if the critical resource is held by a lower-priority task:
    raise the holding task to the highest priority among the waiters
    other tasks wait for the resource
Advantages of PIP :
Priority Inheritance protocol has the following advantages:
 It allows the different priority tasks to share the critical resources.
 The most prominent advantage with Priority Inheritance Protocol is that it avoids
the unbounded priority inversion.
Disadvantages of PIP:
Priority Inheritance Protocol has two major problems which may occur:
 Deadlock –
There is a possibility of deadlock in the priority inheritance protocol.
For example, consider two tasks T1 and T2, where T1 has higher priority than T2. T2 starts running first and holds the critical resource CR2.
After that, T1 arrives and preempts T2. T1 holds the critical resource CR1 and also tries to hold CR2, which is held by T2. Now T1 blocks, and T2 inherits the priority of T1 according to PIP. T2 resumes execution and now tries to hold CR1, which is held by T1.
Thus, both T1 and T2 are deadlocked.
 Chain Blocking –
When a task goes through priority inversion each time it needs a resource, the process is called chain blocking.
For example, consider two tasks T1 and T2, where T1 has higher priority than T2. T2 holds the critical resources CR1 and CR2. T1 arrives and requests CR1; T2 undergoes priority inversion (inheriting T1's priority) according to PIP.
Now T1 requests CR2, and again T2 inherits priority according to PIP.
Hence, multiple priority inversions to obtain the critical resources lead to chain blocking.

How to use priority inheritance
Fatal embraces, deadlocks, and obscure bugs await the programmer who isn't
careful about priority inversions.
A preemptive real-time operating system (RTOS) forms the backbone of most embedded
systems devices, from digital cameras to life-saving medical equipment. The RTOS can
schedule an application's activities so that they appear to occur simultaneously. By
rapidly switching from one activity to the next, the RTOS is able to quickly respond to
real-world events.
To ensure rapid response times, an embedded RTOS can use preemption, in which a
higher-priority task can interrupt a low-priority task that's running. When the high-
priority task finishes running, the low-priority task resumes executing from the point at
which it was interrupted. The use of preemption guarantees worst-case performance
times, which enable use of the application in safety-critical situations.
Unfortunately, the need to share resources between tasks operating in a preemptive
multitasking environment can create conflicts. Two of the most common problems are
deadlock and priority inversion, both of which can result in application failure. In 1997,
the Mars Pathfinder mission nearly failed because of an undetected priority inversion.
When the rover was collecting meteorological data on Mars, it began experiencing
system resets, losing data. The problem was traced to priority inversion. A solution to
the inversion was developed and uploaded to the rover, and the mission completed
successfully. Such a situation might have been avoided had the designers of the rover
accounted for the possibility of priority inversion.1
This article describes in detail the problem of priority inversion and indicates two
common solutions. Also provided are detailed strategies for avoiding priority inversion.
Avoiding priority inversion is preferable to most other solutions, which generally require
more code, more memory, and more overhead when accessing shared resources.

Priority inversion
Priority inversion occurs when a high-priority task is forced to wait for the release of a
shared resource owned by a lower-priority task. The two types of priority inversion,
bounded and unbounded, occur when two tasks attempt to access a single shared
resource. A shared resource can be anything that must be used by two or more tasks in
a mutually exclusive fashion. The period of time that a task has a lock on a shared
resource is called the task's critical section or critical region.

Figure 1: Bounded priority inversion


Bounded priority inversion , shown in Figure 1, occurs when low-priority Task L acquires
a lock on a shared resource, but before releasing the resource is preempted by high-
priority Task H.2 Task H attempts to acquire the resource but is forced to wait for Task L
to finish its critical section. Task L continues running until it releases the resource, at
which point Task H acquires the resource and resumes executing. The worst-case wait
time for Task H is equal to the length of the critical section of Task L. Bounded priority
inversion won't generally hurt an application provided the critical section of Task L
executes in a timely manner.

Figure 2: Unbounded priority inversion


Unbounded priority inversion , shown in Figure 2, occurs when an intervening task
extends a bounded priority inversion, possibly forever.2 In the previous example,
suppose medium-priority Task M preempts Task L during the execution of Task L's
critical section. Task M runs until it relinquishes control of the processor. Only when Task
M turns over control can Task L finish executing its critical section and release the shared
resource. This extension of the critical region leads to unbounded priority inversion.
When Task L releases the resource, Task H can finally acquire the resource and resume
execution. The worst-case wait time for Task H is now equal to the sum of the worst-
case execution times of Task M and the critical section of Task L. Unbounded priority
inversion can have much more severe consequences than a bounded priority inversion.
If Task M runs indefinitely, neither Task L nor Task H will get an opportunity to resume
execution.
Figure 3: Chain of nested resource locks
Priority inversion can have an even more severe effect on an application when there
are nested resource locks , as shown in Figure 3. Suppose Task 1 is waiting for Resource
A. Resource A is owned by lower-priority Task 2, which is waiting for Resource B.
Resource B is owned by still lower-priority Task 3, which is waiting for Resource C, which
is being used by an even lower priority Task 4. Task 1 is blocked, forced to wait for tasks
4, 3, and 2 to finish their critical regions before it can begin execution. Such a chain of
nested locks is difficult to resolve quickly and efficiently. The risk of an unbounded
priority inversion is also high if many tasks intervene between tasks 1 and 4.

Figure 4: Deadlock

Deadlock
Deadlock , shown in Figure 4, is a special case of nested resource locks, in which a
circular chain of tasks waiting for resources prevents all the tasks in the chain from
executing.2 Deadlocked tasks can have potentially fatal consequences for the application.
Suppose Task A is waiting for a resource held by Task B, while Task B is waiting for a
resource held by Task C, which is waiting for a resource held by Task A. None of the
three tasks is able to acquire the resource it needs to resume execution, so the
application is deadlocked.

Priority ceiling protocol


One way to solve priority inversion is to use the priority ceiling protocol , which gives
each shared resource a predefined priority ceiling. When a task acquires a shared
resource, the task is hoisted (has its priority temporarily raised) to the priority ceiling of
that resource. The priority ceiling must be higher than the highest priority of all tasks
that can access the resource, thereby ensuring that a task owning a shared resource
won't be preempted by any other task attempting to access the same resource. When
the hoisted task releases the resource, the task is returned to its original priority level.
Any operating system that allows task priorities to change dynamically can be used to
implement the priority ceiling protocol.3
A static analysis of the application is required to determine the priority ceiling for each
shared resource, a process that is often difficult and time consuming. To perform a static
analysis, every task that accesses each shared resource must be known in advance. This
might be difficult, or even impossible, to determine for a complex application.
The priority ceiling protocol provides a good worst-case wait time for a high-priority task
waiting for a shared resource. The worst-case wait time is limited to the longest critical
section of any lower-priority task that accesses the shared resource. The priority ceiling
protocol prevents deadlock by stopping chains of nested locks from developing.
On the downside, the priority ceiling protocol has poor average-case response time
because of the significant overhead associated with implementing the protocol. Every
time a shared resource is acquired, the acquiring task must be hoisted to the resource's
priority ceiling. Conversely, every time a shared resource is released, the hoisted task's
priority must be lowered to its original level. All this extra code takes time.
By hoisting the acquiring task to the priority ceiling of the resource, the priority ceiling
protocol prevents locks from being contended. Because the hoisted task has a priority
higher than that of any other task that can request the resource, no task can contend
the lock. A disadvantage of the priority ceiling protocol is that the priority of a task
changes every time it acquires or releases a shared resource. These priority changes
occur even if no other task would compete for the resource at that time.
Medium-priority tasks are often unnecessarily prevented from running by the priority
ceiling protocol. Suppose a low-priority task acquires a resource that's shared with a
high-priority task. The low-priority task is hoisted to the resource's priority ceiling, above
that of the high-priority task. Any tasks with a priority below the resource's priority
ceiling that are ready to execute will be prevented from doing so, even if they don't use
the shared resource.
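Where POSIX threads are available, the priority ceiling protocol corresponds to the PTHREAD_PRIO_PROTECT mutex protocol. A brief C illustration follows; the ceiling value of 10 is an arbitrary example and must exceed the priority of every task that uses the lock.

#include <pthread.h>

pthread_mutex_t shared_resource_lock;

void init_ceiling_mutex(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* Hoist whichever task owns this mutex to the ceiling priority. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 10);  /* arbitrary ceiling */
    pthread_mutex_init(&shared_resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}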

Priority inheritance protocol


An alternative to the priority ceiling protocol is the priority inheritance protocol , a
variation that uses dynamic priority adjustments. When a low-priority task acquires a
shared resource, the task continues running at its original priority level. If a high-priority
task requests ownership of the shared resource, the low-priority task is hoisted above
the requesting task. The low-priority task can then continue executing its critical section
until it releases the resource. Once the resource is released, the task is dropped back to
its original low-priority level, permitting the high-priority task to use the resource it has
just acquired.3
Because the majority of locks in real-time applications aren't contended, the priority
inheritance protocol has good average-case performance. When a lock isn't contended,
priorities don't change; there is no additional overhead. However, the worst-case
performance for the priority inheritance protocol is worse than the worst-case priority
ceiling protocol, since nested resource locks increase the wait time. The maximum
duration of the priority inversion is the sum of the execution times of all of the nested
resource locks. Furthermore, nested resource locks can lead to deadlock when you use
the priority inheritance protocol. That makes it important to design the application so
that deadlock can't occur.
Nested resource locks should obviously be avoided if possible. An inadequate or
incomplete understanding of the interactions between tasks can lead to nested resource
locks. A well-thought-out design is the best tool a programmer can use to prevent these.
You can avoid deadlock by allowing each task to own only one shared resource at a time.
When this condition is met, the worst-case wait time matches the priority ceiling
protocol's worst-case wait. In order to prevent misuse, some operating systems that
implement priority inheritance don't allow nested locks. It might not be possible,
however, to eliminate nested resource locks in some applications without seriously
complicating the application.
But remember that allowing tasks to acquire multiple priority inheritance resources can
lead to deadlock and increase the worst-case wait time.
Priority inheritance is difficult to implement, with many complicated scenarios arising
when two or more tasks attempt to access the same resources. The algorithm for
resolving a long chain of nested resource locks is complex. It's possible to incur a lot of
overhead as hoisting one task results in hoisting another task, and another, until finally
some task is hoisted that has the resources needed to run. After executing its critical
section, each hoisted task must then return to its original priority.
Figure 5 shows the simplest case of the priority inheritance protocol in which a low-
priority task acquires a resource that's then requested by a higher priority task. Figure 6
shows a slightly more complex case, with a low-priority task owning a resource that's
requested by two higher-priority tasks. Figure 7 demonstrates the potential for
complexity when three tasks compete for two resources.

Figure 5: Simple priority inheritance

1. Task L receives control of the processor and begins executing.
 The task makes a request for Resource A.
2. Task L is granted ownership of Resource A and enters its critical region.
3. Task L is preempted by Task H, a higher-priority task.
 Task H begins executing and requests ownership of Resource A, which is owned by
Task L.
4. Task L is hoisted to a priority above Task H and resumes executing its critical region.
5. Task L releases Resource A and is lowered back to its original priority.
 Task H acquires ownership of Resource A and begins executing its critical region.
6. Task H releases Resource A and continues executing normally.
7. Task H finishes executing and Task L continues executing normally.
8. Task L finishes executing.
Figure 6: Three-task, one-resource priority inheritance

1. Task 3 gets control of the processor and begins executing.
 The task requests ownership of Resource A.
2. Task 3 acquires Resource A and begins executing its critical region.
3. Task 3 is preempted by Task 2, a higher-priority task.
 Task 2 begins executing normally and requests Resource A, which is owned by Task
3.
4. Task 3 is hoisted to a priority above Task 2 and resumes executing its critical region.
5. Task 3 is preempted by Task 1, a higher-priority task.
 Task 1 begins executing and requests Resource A, which is owned by Task 3.
6. Task 3 is hoisted to a priority above Task 1.
 Task 3 resumes executing its critical region.
7. Task 3 releases Resource A and is lowered back to its original priority.
 Task 1 acquires ownership of Resource A and begins executing its critical region.
8. Task 1 releases Resource A and continues executing normally.
9. Task 1 finishes executing. Task 2 acquires Resource A and begins executing its critical
region.
10. Task 2 releases Resource A and continues executing normally.
11. Task 2 finishes executing. Task 3 resumes and continues executing normally.
12. Task 3 finishes executing.

Figure 7: Three-task, two-resource priority inheritance

1. Task 3 is given control of the processor and begins executing. The task requests
Resource A.
2. Task 3 acquires ownership of Resource A and begins executing its critical region.
3. Task 3 is preempted by Task 2, a higher-priority task. Task 2 requests ownership of
Resource B.
4. Task 2 is granted ownership of Resource B and begins executing its critical region.
 The task requests ownership of Resource A, which is owned by Task 3.
5. Task 3 is hoisted to a priority above Task 2 and resumes executing its critical region.
6. Task 3 is preempted by Task 1, a higher-priority task.
 Task 1 requests Resource B, which is owned by Task 2.
7. Task 2 is hoisted to a priority above Task 1. However, Task 2 still can't execute
because it must wait for Resource A, which is owned by Task 3.
 Task 3 is hoisted to a priority above Task 2 and continues executing its critical
region.
8. Task 3 releases Resource A and is lowered back to its original priority.
 Task 2 acquires ownership of Resource A and resumes executing its critical region.
9. Task 2 releases Resource A and then releases Resource B. The task is lowered back to
its original priority.
 Task 1 acquires ownership of Resource B and begins executing its critical region.
10. Task 1 releases Resource B and continues executing normally.
11. Task 1 finishes executing. Task 2 resumes and continues executing normally.
12. Task 2 finishes executing. Task 3 resumes and continues executing normally.
13. Task 3 finishes executing.

Manage resource ownership


Most RTOSes that support priority inheritance require resource locks to be properly
nested, meaning the resources must be released in the reverse order to that in which
they were acquired. For example, a task that acquired Resource A and then Resource B
would be required to release Resource B before releasing Resource A.
Figure 7 provides an example of priority inheritance in which two resources are released
in the opposite order to that in which they were acquired. Task 2 acquired Resource A
before Resource B; Resource B was then released before Resource A. In this example,
Task 2 was able to release the resources in the proper order without adversely affecting
the application.
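Continuing the POSIX mutex sketch from earlier (resource_a_lock and resource_b_lock are illustrative names), properly nested locking looks like this:

    /* Properly nested: resources are released in the reverse order
     * of acquisition, which any priority inheritance implementation
     * accepts. */
    pthread_mutex_lock(&resource_a_lock);
    /* ... critical region using only Resource A ... */
    pthread_mutex_lock(&resource_b_lock);
    /* ... critical region using Resources A and B ... */
    pthread_mutex_unlock(&resource_b_lock);   /* B first: reverse order */
    pthread_mutex_unlock(&resource_a_lock);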
Many operating systems require resources to be released in the proper order because it's
difficult to implement the capability to do otherwise. However, situations occur in which
releasing the resources in the proper order is neither possible nor desirable. Suppose
there are two shared resources: Resource B can't be acquired without first owning
Resource A. At some point during the execution of the critical region with Resource B,
Resource A is no longer needed. Ideally, Resource A would now be released.
Unfortunately, many operating systems don't allow that. They require Resource A to be
held until Resource B is released, at which point Resource A can be released. If a higher-
priority task is waiting for Resource A, the task is kept waiting unnecessarily while the
resource's current owner executes.
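In code, the out-of-order release reads as follows (again a sketch; POSIX mutexes happen to permit unlocking in any order, while many RTOS lock implementations do not):

    /* "Out-of-order" release: Resource A is dropped as soon as it is
     * no longer needed, even though Resource B is still held. */
    pthread_mutex_lock(&resource_a_lock);
    pthread_mutex_lock(&resource_b_lock);
    /* ... work that needs both Resource A and Resource B ... */
    pthread_mutex_unlock(&resource_a_lock);   /* A first: not nested */
    /* ... possibly lengthy work that needs only Resource B ... */
    pthread_mutex_unlock(&resource_b_lock);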

Figure 8: Managing resource ownership (example 1)

1. Task L is given control of the processor and begins executing. Task L requests
Resource A.
2. Task L acquires ownership of Resource A and begins executing its critical region.
 Task L requests Resource B.
3. Task L acquires ownership of Resource B and continues executing its critical region.
4. Task L is preempted by Task H, a higher-priority task.
 Task H requests ownership of Resource A, which is owned by Task L.
5. Task L is hoisted above Task H and continues executing its critical region.
6. Task L releases Resource A even though it was acquired before Resource B.
 Task L is lowered to its original priority level.
 Task H acquires Resource A and begins executing its critical region.
 Note that low-priority Task L no longer prevents Task H from running, even though
Task L still owns Resource B.
7. Task H releases Resource A and continues executing normally.
8. Task H finishes executing and Task L continues executing its critical region.
9. Task L releases Resource B and continues executing normally.
10. Task L finishes executing.

In Figure 8, Task L releases its resources in the same order they were acquired, which
most priority inheritance implementations wouldn't allow. Upon releasing Resource A,
Task L drops to its original priority level. This allows Task H to acquire Resource A,
ending the priority inversion. After Task H has released the resource and stopped
executing, Task L can continue executing with Resource B, on which it has an
uncontested lock. If Task L had been required to release Resource B before releasing
Resource A, Task H would have been prevented from running until Task L had released
both resources. This would have unnecessarily lengthened the duration of the bounded
priority inversion.

Figure 9: Managing resource ownership (example 2)

1. Task 3 is given control of the processor and begins executing. Task 3 requests
Resource A.
2. Task 3 acquires ownership of Resource A and begins executing its critical region.
 Task 3 requests Resource B.
3. Task 3 acquires ownership of Resource B and continues executing its critical region.
4. Task 3 is preempted by Task 2, a higher-priority task.
 Task 2 requests ownership of Resource A, which is owned by Task 3.
5. Task 3 is hoisted above Task 2 and continues executing its critical region.
6. Task 3 is preempted by Task 1, a higher-priority task.
 Task 1 requests Resource B, which is owned by Task 3.
7. Task 3 is hoisted above Task 1 and continues executing its critical region.
8. Task 3 releases Resource A and continues executing with Resource B.
 Note that Task 3 is not dropped to either its previous priority or its original priority.
To do so would immediately produce a priority inversion that requires Task 3 to be
hoisted above Task 1 because Task 3 still owns Resource B.
9. Task 3 releases Resource B and is lowered to its original priority level.
 Task 1 acquires Resource B and continues executing its critical region.
10. Task 1 releases Resource B and continues executing normally.
11. Task 1 finishes executing and Task 2 acquires Resource A. Task 2 begins executing its
critical region.
12. Task 2 releases Resource A and continues executing normally.
13. Task 2 finishes executing and Task 3 continues executing normally.
14. Task 3 finishes executing.
The difficulty of implementing locks that aren't properly nested becomes apparent when
three tasks compete for two resources, as seen in Figure 9. When Resource A is released
at Time 8, low-priority Task 3 remains hoisted to the highest priority. An improperly
designed priority inheritance protocol would lower Task 3 to its original priority level,
which was the task's priority before acquiring Resource A. Task 3 would then have to be
immediately hoisted above Task 1 to avoid a priority inversion, because of the
contention for access to Resource B. Unbounded priority inversion could occur while Task
3 is momentarily lowered. A medium-priority task could preempt Task 3, extending the
priority inversion indefinitely.
The example in Figure 8 shows why it's sometimes desirable to release nested resources
“out of order,” or not the reverse of the order in which they were acquired. Although
such a capability is clearly advantageous, many implementations of the priority
inheritance protocol only support sequentially nested resource locks.
The example in Figure 9 helps show why it's more difficult to implement priority
inheritance while allowing resources to be released in any order. If a task owns multiple
shared resources and has been hoisted several times, care must be taken when the task
releases those resources. The task's priority must be adjusted to the appropriate level.
Failure to do so may result in unbounded priority inversion.
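A correct implementation must recompute the releasing task's priority from everything it still holds, rather than simply restoring a saved value. The following kernel-side sketch is hypothetical (all structure and field names are invented for illustration; larger numbers mean higher priority):

    /* Recompute a task's effective priority when it releases a lock.
     * The task must remain hoisted to the priority of the highest-
     * priority waiter on ANY lock it still owns; dropping lower, even
     * momentarily, reopens the window for unbounded inversion. */

    struct lock {
        struct lock *next_held;   /* next lock owned by the same task  */
        int top_waiter_prio;      /* highest priority among waiters,
                                     or -1 if no task is blocked on it */
    };

    struct task {
        int base_prio;            /* priority assigned at design time  */
        int eff_prio;             /* current, possibly hoisted priority */
        struct lock *held;        /* locks this task still owns        */
    };

    static void recompute_priority(struct task *t)
    {
        int prio = t->base_prio;

        for (struct lock *l = t->held; l != NULL; l = l->next_held)
            if (l->top_waiter_prio > prio)
                prio = l->top_waiter_prio;

        t->eff_prio = prio;
    }

Applied to Figure 9, calling this at Time 8 leaves Task 3 hoisted above Task 1, because Task 1 is still on Resource B's waiter list — exactly the behavior the walkthrough requires.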

Avoid inversion
The best strategy for solving priority inversion is to design the system so that inversion
can't occur. Although priority ceilings and priority inheritance both prevent unbounded
priority inversion, neither protocol prevents bounded priority inversion. Priority inversion, whether bounded or not, is inherently a contradiction: you don't want a high-priority task waiting on a low-priority task that holds a shared resource.
Prior to implementing an application, examine its overall design. If possible, avoid
sharing resources between tasks at all. If no resources are shared, priority inversion is
precluded.
If several tasks do use the same resource, consider combining them into a single task.
The sub-tasks can access the resource through a state machine in the combined task
without fear of priority inversion. Unless the competing sub-tasks are fairly simple,
however, the state machine might be too complex to justify.
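A minimal sketch of the combined-task idea (the states and the work they perform are invented for illustration):

    /* Two former tasks folded into one. The shared resource is now
     * touched from a single task, so no lock is needed and priority
     * inversion cannot occur. */
    enum state { DO_SENSOR_WORK, DO_LOGGING_WORK };

    void combined_task(void)
    {
        enum state s = DO_SENSOR_WORK;

        for (;;) {
            switch (s) {
            case DO_SENSOR_WORK:
                /* work formerly done by the sensor sub-task,
                   using the shared resource directly */
                s = DO_LOGGING_WORK;
                break;
            case DO_LOGGING_WORK:
                /* work formerly done by the logging sub-task */
                s = DO_SENSOR_WORK;
                break;
            }
        }
    }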
Another way to prevent priority inversion is to ensure that all tasks that access a
common resource have the same priority. Although one task might still wait while
another task uses the resource, no priority inversion will occur because both tasks have
the same priority. Of course, this only works if the RTOS provides a non-preemptive
mechanism for gracefully switching between tasks of equal priority.
If you can't use any of these techniques to manage shared resources, consider giving a
“server task” sole possession of the resource. The server task can then regulate access
to the resource. When a “client task” needs the resource, it must call upon the server
task to perform the required operations and then wait for the server to respond. The
server task must be at a priority greater than that of the highest-priority client task that
will access the resource. This method of controlling access to a resource is similar to the
priority ceiling protocol and requires static analysis to determine the priority of the
server task. The method relies on RTOS message passing and synchronization services
instead of resource locks and dynamic task-priority adjustments.
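A rough sketch of the server-task pattern using POSIX message queues follows; the queue names, request format, and operation performed are all illustrative assumptions, not part of the original text:

    #include <fcntl.h>
    #include <mqueue.h>

    /* A request asks the server to perform one operation on the shared
     * resource; the client then blocks on its own reply queue. */
    struct request {
        int op;                    /* operation code, application-defined */
        char reply_q[32];          /* name of the client's reply queue    */
    };

    /* Application-specific placeholder: the only code that touches
     * the shared resource. */
    static int perform_operation(int op) { return op; }

    /* Server task: sole owner of the shared resource. It must run at a
     * priority above that of every client task using the resource. */
    void server_task(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8,
                                .mq_msgsize = sizeof(struct request) };
        mqd_t req_q = mq_open("/resource_server", O_CREAT | O_RDONLY,
                              0600, &attr);
        struct request req;

        for (;;) {
            if (mq_receive(req_q, (char *)&req, sizeof req, NULL) < 0)
                continue;                       /* ignore bad requests */

            int result = perform_operation(req.op);

            /* The client's reply queue is assumed to exist with a
             * message size of at least sizeof(int). */
            mqd_t reply = mq_open(req.reply_q, O_WRONLY);
            if (reply != (mqd_t)-1) {
                mq_send(reply, (const char *)&result, sizeof result, 0);
                mq_close(reply);
            }
        }
    }

A client sends a struct request on /resource_server and then blocks on its reply queue. No lock is ever taken, so no priority inversion can arise; the server's priority, however, must still be chosen by static analysis, as noted above.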

Prioritize
Priority inversion is a serious problem that, if allowed to occur, can cause a system to
fail. It's generally simpler to avoid priority inversion than to solve it in software. If
possible, eliminate the need for shared resources altogether, avoiding any chance of
priority inversion. If you can't avoid priority inversion, at least make sure it's bounded.
Unbounded priority inversions can leave high-priority tasks unable to execute, resulting
in application failure. Two common methods of bounding priority inversion are the
priority ceiling protocol and the priority inheritance protocol. Neither protocol is perfect
for all situations. Hence, good analysis and design are always necessary to understand
which solution, or combination of solutions, is needed for your particular application.

Embedded Operating System (discussed in Unit 2)
An embedded operating system is a computer operating system designed for use in
embedded computer systems. These operating systems are designed to be small,
resource-efficient, and dependable, and they omit many features that aren't required
by specialized applications.

The hardware that runs an embedded operating system is usually quite resource-
constrained. Embedded hardware is typically highly specialized: because resources
are limited, these systems are designed to perform a specific set of tasks.

This section describes the embedded operating system, its features, and its
advantages and disadvantages.

What is an Embedded Operating System?


An embedded operating system is built specifically for embedded computer systems
and has a deliberately limited feature set. The term "embedded operating system" is
often used interchangeably with "real-time operating system". The main goal of
designing an embedded operating system is to perform specified tasks for
non-computer devices; it runs the program code that allows the device to complete
its job.

An embedded system is a combination of software and hardware. It produces results
that humans can easily understand, in formats such as images, text, and voice.
Embedded operating systems are developed in programming languages such as C
and C++, which are compiled into the machine code that the hardware executes.

Advantages and disadvantages of Embedded Operating System
An embedded operating system has various advantages and disadvantages. Some of
them are as follows:

Advantages

1. It is small in size and loads quickly.
2. It is low cost.
3. It is easy to manage.
4. It provides better stability.
5. It provides higher reliability.
6. It supports interconnection with other devices.
7. It has low power consumption.
8. It helps to increase the product quality.

Disadvantages
There are various disadvantages of an embedded operating system. Some of them
are as follows:

1. It isn't easy to maintain.
2. Troubleshooting is harder.
3. It has limited resources for memory.
4. It isn't easy to take a backup of embedded files.
5. You can't change, improve, or upgrade an embedded system once it's been
developed.
6. If a problem occurs, you may need to reset the system.
7. Its hardware is limited.

Handheld Operating System


Handheld operating systems are present in all handheld devices, such as smartphones
and tablets; such devices have also been called Personal Digital Assistants (PDAs).
The popular handheld operating systems in today's market are Android and iOS.
These operating systems require powerful processors and come embedded with
different types of sensors.
