
Process:

• A process is an instance of a computer program that is being executed. It contains the program code and its current
activity.

• Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute
instructions concurrently.

• Process-based multitasking enables you to run the Java compiler at the same time that you are using a text editor.

• When multiple processes share a single CPU, context switching between their separate memory contexts is used.

• Each process has a complete set of its own variables.


Thread:
• A thread is a basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers.

• A thread of execution results from a fork of a computer program into two or more concurrently running tasks.

• The implementation of threads and processes differs from one operating system to another, but in most cases,
a thread is contained inside a process.

• Multiple threads can exist within the same process and share resources such as memory, while different
processes do not share these resources.

• An example of threads in the same process is automatic spell checking and automatic saving of a file while you are writing. Threads are essentially tasks that run in the same memory context.

• Threads may share the same data during execution.
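As a minimal illustration (assuming a POSIX system with pthreads; the names and counts are illustrative), the sketch below shows two threads of one process sharing a single global variable, which separate processes would not:

    #include <stdio.h>
    #include <pthread.h>

    int shared_counter = 0;                 /* visible to every thread in the process */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);      /* serialize access to the shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* 2000: both threads saw one copy */
        return 0;
    }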
Task:
A task is a set of program instructions that are loaded in memory.
Kernel Task/Process Management deals with:
- setting up the memory space for the tasks
- loading the task’s code into the memory space
- allocating system resources
- setting up a Task Control Block (TCB) for the task, and task/process termination/deletion.

• A Task Control Block (TCB) is used for holding the information corresponding to a task.

• A TCB usually contains the following set of information:

• Task ID: Task Identification Number


• Task State: The current state of the task (e.g. State = ‘Ready’ for a task which is ready to execute)
• Task Type: Indicates the type of the task; it can be a hard real-time, soft real-time, or background task.
• Task Priority: The priority assigned to the task
• Task Context Pointer: Pointer used for saving the context of the task
• Task Memory Pointers: Pointers to the code memory, data memory and stack memory for the task
• Task System Resource Pointers: Pointers to system resources (semaphores, mutexes etc.) used by the task
• Task Pointers: Pointers to other TCBs (TCBs for preceding, next and waiting tasks)
• Other Parameters: Other relevant task parameters

The parameters and implementation of the TCB are kernel dependent; the TCB parameters vary across different kernels, based on the task management implementation, as the sketch below suggests.
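A rough C sketch of such a TCB (field names and types are illustrative assumptions, not taken from any particular kernel):

    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;

    typedef struct tcb {
        int           task_id;     /* Task ID: task identification number        */
        task_state_t  state;       /* Task State: e.g. TASK_READY                */
        int           type;        /* Task Type: hard/soft real-time, background */
        int           priority;    /* Task Priority                              */
        void         *context;     /* Task Context Pointer: saved CPU context    */
        void         *code, *data; /* Task Memory Pointers: code and data memory */
        void         *stack;       /* Task Memory Pointers: stack memory         */
        void         *resources;   /* Task System Resource Pointers: semaphores, mutexes, ... */
        struct tcb   *prev, *next; /* Task Pointers: preceding and next TCBs     */
    } tcb_t;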

Task/Process Scheduling: Deals with sharing the CPU among various tasks/processes. A kernel service called the ‘Scheduler’ handles task scheduling. The scheduler is an implementation of a scheduling algorithm, which performs efficient and optimal scheduling of tasks to provide deterministic behavior.

Task/Process Synchronization: Deals with synchronizing concurrent access to a resource that is shared across multiple tasks, and with the communication between various tasks.

Error/Exception handling: Deals with registering and handling the errors that occur and the exceptions that are raised during the execution of tasks. Insufficient memory, timeouts, deadlocks, missed deadlines, bus errors, divide by zero, unknown instruction execution etc. are examples of errors/exceptions. Errors/exceptions can happen at the kernel-level services or at the task level. Deadlock is an example of a kernel-level exception, whereas timeout is an example of a task-level exception. The OS kernel reports information about the error through a system call (API).
Multiprocessing & Multitasking

• The ability to execute multiple processes simultaneously is referred to as multiprocessing


• Systems which are capable of performing multiprocessing are known as multiprocessor systems
• Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously
• The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as
multiprogramming
• Multitasking refers to the ability of an operating system to hold multiple processes in memory and switch the processor
(CPU) from executing one process to another process
• Multitasking involves ‘Context switching’, ‘Context saving’ and ‘Context retrieval’
• Context switching refers to the switching of the execution context from one task to another
 When a task/process switch happens, the current execution context should be saved (Context saving) so that it can be retrieved at a later point of time, when the CPU resumes the process that is currently interrupted due to the execution switch
 During context switching, the context of the task to be executed is retrieved from the saved context list. This is known as Context retrieval



Types of Multitasking:
Depending on how the task/process execution switching is implemented, multitasking is classified into:

Co-operative Multitasking:
 Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU.
 In this method, any task/process can hold the CPU for as much time as it wants. Since this implementation leaves the tasks at each other’s mercy for getting CPU time, it is known as co-operative multitasking.
 If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU.

Preemptive Multitasking:
• Preemptive multitasking ensures that every task/process gets a chance to execute.
• When and how much time a process gets is dependent on the implementation of the preemptive scheduling.
• As the name indicates, in preemptive multitasking, the currently running task/process is preempted to give a chance to
other tasks/process to execute.
• The preemption of a task may be based on time slices or task/process priority
Non-preemptive Multitasking:
 The process/task which is currently given the CPU time is allowed to execute until it terminates (enters the ‘Completed’ state) or enters the ‘Blocked/Wait’ state, waiting for an I/O.
 Co-operative and non-preemptive multitasking differ in their behavior around the ‘Blocked/Wait’ state.
 In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the ‘Blocked/Wait’ state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O.
Task Scheduling:
• In a multitasking system, there should be some mechanism in place to share the CPU among the different tasks and to
decide which process/task is to be executed at a given point of time
• Determining which task/process is to be executed at a given point of time is known as task/process scheduling
• Task scheduling forms the basis of multitasking
• Scheduling policies form the guidelines for determining which task is to be executed when
• The scheduling policies are implemented in an algorithm, which is run by the kernel as a service
• The kernel service/application, which implements the scheduling algorithm, is known as ‘Scheduler’
• The task scheduling policy can be pre-emptive, non-preemptive or co- operative
• Depending on the scheduling policy, the process scheduling decision may take place when a process switches to the:
➢ ‘Ready’ state from ‘Running’ state
➢ ‘Blocked/Wait’ state from ‘Running’ state
➢ ‘Ready’ state from ‘Blocked/Wait’ state
➢ ‘Completed’ state
Task Scheduling - Scheduler Selection:
The selection of a scheduling criteria/algorithm should consider
• CPU Utilization: The scheduling algorithm should always make the CPU utilization high. CPU utilization is a direct
measure of how much percentage of the CPU is being utilized.
• Throughput: This gives an indication of the number of processes executed per unit of time. The throughput for a good
scheduler should always be high.
• Turnaround Time: It is the amount of time taken by a process for completing its execution. It includes the time spent by the
process for waiting for the main memory, time spent in the ready queue, time spent on completing the I/O operations, and
the time spent in execution. The turnaround time should be a minimum for a good scheduling algorithm.
• Waiting Time: It is the amount of time spent by a process in the ‘Ready’ queue waiting to get the CPU time for execution.
The waiting time should be minimal for a good scheduling algorithm.
• Response Time: It is the time elapsed between the submission of a process and the first response. For a good scheduling
algorithm, the response time should be as low as possible.

To summarize, a good scheduling algorithm has high CPU utilization, minimum turnaround time (TAT), maximum throughput and minimum response time.
Task Scheduling - Queues
The various queues maintained by the OS in association with CPU scheduling are:
• Job Queue: Job queue contains all the processes in the system
• Ready Queue: Contains all the processes, which are ready for execution and waiting for CPU to get their turn for
execution. The Ready queue is empty when there is no process ready for running.
• Device Queue: Contains the set of processes, which are waiting for an I/O device
Earliest Deadline First (EDF) CPU scheduling algorithm
• Earliest Deadline First (EDF) is an optimal dynamic priority scheduling algorithm used in real-time systems.
It can be used for both static and dynamic real-time scheduling.

• EDF assigns priorities to the jobs for scheduling.

• It assigns priorities to the tasks according to their absolute deadlines. The task whose deadline is closest gets the highest
priority.

• The priorities are assigned and changed in a dynamic fashion. EDF is very efficient compared to other scheduling
algorithms in real-time systems. It can push CPU utilization to about 100% while still guaranteeing the deadlines of all
the tasks.

• EDF is, however, sensitive to overload (see the limitations below). In EDF, if the CPU utilization does not exceed 100%, then all the tasks meet their
deadlines. EDF finds an optimal feasible schedule.

• The feasible schedule is one in which all the tasks in the system are executed within the deadline.

• If EDF is not able to find a feasible schedule for all the tasks in the real-time system, then it means that no other task
scheduling algorithms in real-time systems can give a feasible schedule.

• All the tasks which are ready for execution should announce their deadline to EDF when the task becomes runnable.
• The EDF scheduling algorithm does not require the tasks or processes to be periodic, nor does it require a fixed
CPU burst time.

• In EDF, any executing task can be preempted if any other periodic instance with an earlier deadline is ready for execution
and becomes active.

• Preemption is allowed in the Earliest Deadline First scheduling algorithm.


Example:
Consider two processes P1 and P2.
Let the period of P1 be p1 = 50 and the processing time of P1 be t1 = 25.
Let the period of P2 be p2 = 75 and the processing time of P2 be t2 = 30.
Steps for solution:
1. The deadline of P1 is earlier, so priority of P1 > P2.
2. Initially P1 runs and completes its execution at time 25.
3. From time 25, P2 executes until time 50, when P1 becomes ready to execute again.
4. Now, comparing the deadlines (P1, P2) = (100, 75), P2 continues to execute.
5. P2 completes its processing at time 55.
6. P1 starts to execute and runs until time 75, when P2 becomes ready to execute again.
7. Now, again comparing the deadlines (P1, P2) = (100, 150), P1 continues to execute.
8. Repeat the above steps…
9. Finally at time 150, both P1 and P2 have the same deadline, so P2 will continue to execute for its processing time, after
which P1 starts to execute.
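The EDF selection rule itself is simple: at every scheduling point, run the ready task with the earliest absolute deadline. A minimal C sketch simulating the two-task example above (structure and names are illustrative, not a real kernel scheduler):

    #include <stdio.h>

    typedef struct { const char *name; int period, burst, remaining, deadline; } task_t;

    /* Pick the ready task with the earliest absolute deadline. */
    static task_t *edf_pick(task_t *t, int n)
    {
        task_t *best = NULL;
        for (int i = 0; i < n; i++)
            if (t[i].remaining > 0 && (!best || t[i].deadline < best->deadline))
                best = &t[i];
        return best;
    }

    int main(void)
    {
        task_t tasks[] = { { "P1", 50, 25, 25, 50 }, { "P2", 75, 30, 30, 75 } };
        const char *last = "";

        for (int time = 0; time < 150; time++) {
            for (int i = 0; i < 2; i++)      /* release a new instance at each period */
                if (time > 0 && time % tasks[i].period == 0) {
                    tasks[i].remaining = tasks[i].burst;
                    tasks[i].deadline  = time + tasks[i].period;
                }
            task_t *run = edf_pick(tasks, 2);
            if (run) {
                if (run->name != last) {     /* report only when the running task changes */
                    printf("t=%3d: %s runs\n", time, run->name);
                    last = run->name;
                }
                run->remaining--;
            }
        }
        return 0;
    }

Running this reproduces the trace above: P1 runs at t=0, P2 at t=25, P2 continues at t=50, and P1 resumes at t=55.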

Limitations of EDF scheduling algorithm:


• Transient Overload Problem
• Resource Sharing Problem
• Efficient Implementation Problem
Scheduling Points

• The scheduling points are the set of operating system events that result in an invocation of the scheduler.
• There are three such events; the first two are task creation and task deletion.
• During each of these events a method is called to select the next task to be run.
• The third scheduling point, called the clock tick, is a periodic event triggered by a timer interrupt. When a timer
expires, all of the tasks that are waiting for it to complete are changed from the waiting state to the ready state.

 Ready List
The scheduler uses a data structure called the ready list to track the tasks that are in the ready state.
 The ready list is implemented as an ordinary linked list, ordered by priority.
 So the head of this list is always the highest priority task that is ready to run.
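A minimal C sketch of inserting a task into such a priority-ordered ready list (reusing the illustrative tcb_t from the TCB sketch earlier, and assuming lower numbers mean higher priority):

    /* Insert 'task' so the list stays sorted by priority; the head
     * is always the highest-priority task that is ready to run.  */
    tcb_t *ready_list_insert(tcb_t *head, tcb_t *task)
    {
        if (!head || task->priority < head->priority) {  /* new highest priority */
            task->next = head;
            return task;
        }
        tcb_t *p = head;
        while (p->next && p->next->priority <= task->priority)
            p = p->next;                                 /* walk to insertion point */
        task->next = p->next;
        p->next = task;
        return head;
    }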

 Idle task
 If there are no tasks in the ready state when the scheduler is called, the idle task will be executed.
 The idle task looks the same in every operating system.
 The idle task is always considered to be in the ready state.
Context Switch 
• In computing, a context switch is the process of storing the state of a process or thread, so that it can be restored and
resume execution at a later point.

• This allows multiple processes to share a single CPU, and is an essential feature of a multitasking operating system.

• The precise meaning of the phrase “context switch” varies. In a multitasking context, it refers to the process of storing the
system state for one task, so that task can be paused and another task resumed.

• A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up
CPU time for other tasks.

• Some operating systems also require a context switch to move between user mode and kernel mode tasks.

• The process of context switching can have a negative impact on system performance.
Context Switching Triggers
There are three major triggers for context switching. These are given as follows −

Multitasking: 
• In a multitasking environment, a process is switched out of the CPU so another process can be run.
• The state of the old process is saved and the state of the new process is loaded.
• On a pre-emptive system, processes may be switched out by the scheduler.

Interrupt Handling:
• The hardware switches a part of the context when an interrupt occurs.
• This happens automatically.
• Only some of the context is changed to minimize the time required to handle the interrupt.

User and Kernel Mode Switching: 


• A context switch may take place when a transition between the user mode and kernel mode is required in the operating
system.
Context Switching Steps
The steps involved in context switching are as follows −

• Save the context of the process that is currently running on the CPU. Update the process control block and other important
fields.

• Move the process control block of the above process into the relevant queue such as the ready queue, I/O queue etc.

• Select a new process for execution.

• Update the process control block of the selected process. This includes updating the process state to running.

• Update the memory management data structures as required.

• Restore the context of the process that was previously running when it is loaded again on the processor. This is done by
loading the previous values of the process control block and registers.
• A context switch is the mechanism to store and restore the state or context of a CPU in Process Control block (PCB) so
that a process execution can be resumed from the same point at a later time.
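The save/restore mechanism can be illustrated in user space with the POSIX ucontext API; this is only a sketch of the idea, since a real kernel would additionally update PCBs, queues and memory-management structures:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[16384];

    static void task(void)
    {
        printf("task: running, switching back\n");
        swapcontext(&task_ctx, &main_ctx);   /* save task context, restore main */
        printf("task: resumed at the saved point\n");
    }

    int main(void)
    {
        getcontext(&task_ctx);               /* initialize the task's context   */
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link          = &main_ctx;
        makecontext(&task_ctx, task, 0);

        swapcontext(&main_ctx, &task_ctx);   /* context switch: main -> task    */
        printf("main: task yielded\n");
        swapcontext(&main_ctx, &task_ctx);   /* resume task where it left off   */
        printf("main: task finished\n");
        return 0;
    }

Here swapcontext() performs both context saving and context retrieval in a single call: it stores the current registers and stack pointer into one context object and loads another.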

• How exactly can a process (say Process-1) be resumed after control is given to another process (say Process-2)? Here:

1. The Program Counter stores the address of the instruction from which execution will resume, i.e., the instruction after the last instruction of Process-1 that was executed.

2. The file manager stores all the data that has been written, so that when Process-1 resumes it can retrieve the data and continue processing.

3. The Process Control Block (say PCB-0) saves the execution state.

After all this, control is given to another process (say Process-2), and another PCB (say PCB-1) stores its executed instructions.
So, if control comes back to Process-1, it will load PCB-0 and resume.
Context Switching Cost

• Context Switching leads to an overhead cost because of TLB flushes, sharing the cache between multiple tasks, running the
task scheduler etc.

• Context switching between two threads of the same process is faster than between two different processes, as threads share
the same virtual memory map; because of this, TLB flushing is not required.
The need for Context switching
• Context switching helps to share a single CPU across all processes so that each completes its execution, and it stores the
status of the system's tasks. When a process is reloaded in the system, its execution starts at the same point where it was
switched out.
Following are the reasons that describe the need for context switching in the Operating system.

• One process cannot be switched to another directly in the system. Context switching helps the operating
system switch between multiple processes, use the CPU's resources to accomplish tasks, and store each process's context,
so that the service of the process can be resumed at the same point later. If the currently running process's data or
context were not stored, they would be lost while switching between processes.

• If a high-priority process falls into the ready queue, the currently running process is stopped so that the high-priority
process can complete its tasks in the system.

• If any running process requires I/O resources in the system, the current process is switched out so that another process can
use the CPU. When the I/O requirement is met, the old process goes into the ready state to wait for its turn for execution on
the CPU. Context switching stores the state of the process so that its tasks can be resumed; otherwise, the process
would need to restart its execution from the initial level.
• If any interrupt occurs while a process is running in the operating system, the process status, including its registers, is saved using
context switching. After the interrupt is resolved, the process switches from the wait state to the ready state to resume its
execution later at the same point where the operating system's interruption occurred.

• Context switching allows a single CPU to handle multiple process requests, apparently simultaneously, without the need for any
additional processors.

Context switching triggers


The three types of context switching triggers are as follows.
1. Interrupts
2. Multitasking
3. Kernel/User switch

Interrupts: When, for example, a CPU requests data to be read from a disk and an interrupt occurs, the hardware automatically
switches a part of the context, so that less time is required to handle the interrupt.

Multitasking: Context switching is the characteristic of multitasking that allows a process to be switched out of the CPU so
that another process can run. When switching the process, the old state is saved so that the process's execution can resume at the
same point in the system.

Kernel/User Switch: A context switch may be performed in the operating system when switching between user mode and
kernel mode.
Task Synchronization
Inter-task Communication and Synchronization
• All the tasks in a multitasking operating system work together to solve a larger problem, and to synchronize their
activities they occasionally communicate with one another.

• For example, in the printer sharing device the printer task doesn’t have any work to do until new data is supplied to it by one
of the computer tasks.

• So the printer and the computer tasks must communicate with one another to coordinate their access to common data
buffers.
Shared Variables or Memory Areas

• A simplistic approach to inter-task communication is to just have variables or memory areas which are accessible to all
the tasks concerned. 

• Whilst it is very primitive, this approach may be applicable to some applications. There is a need to control access. 

• If the variable is simply a byte, then a write or a read to it will probably be an “atomic” (i.e. uninterruptible) operation,
but care is needed if the processor allows other operations on bytes of memory, as they may be interruptible and a
timing problem could result. One way to effect a lock/unlock is simply to disable interrupts for a short time.

• If you are using a memory area, of course you still need locking. Using the first byte as a locking flag is a possibility,
assuming that the memory architecture facilitates atomic access to this byte.

• One task loads data into the memory area, sets the flag and then waits for it to clear. The other task waits for the flag to
be set, reads the data and clears the flag.

• Using interrupt disable as a lock is less wise, as moving the whole buffer of data may take time.

• This type of shared memory usage is similar in style to the way many inter-processor communication facilities are
implemented in multicore systems.

• In some cases, a hardware lock and/or an interrupt are incorporated into the inter-processor shared memory interface.
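A minimal C11 sketch of the flag-byte protocol described above (names are illustrative; C11 atomics are used to obtain the atomic byte access the scheme relies on):

    #include <stdatomic.h>
    #include <string.h>

    /* First byte of the shared area is used as a flag: one task loads data
     * and sets the flag; the other waits for the flag, reads and clears it. */
    struct shared_area {
        _Atomic unsigned char flag;   /* 0 = empty, 1 = data available */
        char buffer[64];
    };

    static struct shared_area area;

    void writer_task(const char *msg)
    {
        while (atomic_load(&area.flag))      /* wait until the buffer is free */
            ;
        strncpy(area.buffer, msg, sizeof area.buffer - 1);
        atomic_store(&area.flag, 1);         /* publish: data is available    */
        while (atomic_load(&area.flag))      /* wait for the reader to clear  */
            ;
    }

    void reader_task(char *out, size_t n)
    {
        while (!atomic_load(&area.flag))     /* wait for the flag to be set   */
            ;
        strncpy(out, area.buffer, n - 1);
        out[n - 1] = '\0';
        atomic_store(&area.flag, 0);         /* clear: buffer consumed        */
    }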
Signals

• Signals are probably the simplest inter-task communication facility offered in conventional RTOSes.

• They consist of a set of bit flags – there may be 8, 16 or 32, depending on the specific implementation – which is
associated with a specific task.

• A signal flag (or several flags) may be set by any task using an OR type of operation.

• Only the task that owns the signals can read them.

• The reading process is generally destructive – i.e. the flags are also cleared.

• In some systems, signals are implemented in a more sophisticated way such that a special function – nominated by the
signal owning task – is automatically executed when any signal flags are set.

• This removes the necessity for the task to monitor the flags itself. This is somewhat analogous to an interrupt service
routine.
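As an illustration, FreeRTOS task notifications behave much like the signals described here: any task can set bits with an OR operation, only the owning task can wait on them, and the read clears the flags. A sketch (the task handle and bit assignments are illustrative):

    #include "FreeRTOS.h"
    #include "task.h"

    #define SIG_DATA_READY  (1u << 0)   /* illustrative flag assignments   */
    #define SIG_SHUTDOWN    (1u << 1)

    TaskHandle_t owner_task;            /* set when the owning task is created */

    /* Any task may set signal flags with an OR-type operation. */
    void send_signal(uint32_t flags)
    {
        xTaskNotify(owner_task, flags, eSetBits);
    }

    /* Only the owning task reads them; the read clears the flags. */
    void owner_task_fn(void *params)
    {
        uint32_t flags;
        for (;;) {
            xTaskNotifyWait(0, 0xFFFFFFFFu, &flags, portMAX_DELAY);
            if (flags & SIG_DATA_READY) { /* handle the data */ }
            if (flags & SIG_SHUTDOWN)   { vTaskDelete(NULL); }
        }
    }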
Event Flag Groups

• Event flag groups are like signals in that they are a bit-oriented inter-task communication facility. They may similarly be
implemented in groups of 8, 16 or 32 bits.

• They differ from signals in being independent kernel objects; they do not “belong” to any specific task.

• Any task may set and clear event flags using OR and AND operations.

• Likewise, any task may interrogate event flags using the same kind of operation.

• In many RTOSes, it is possible to make a blocking API call on an event flag combination; this means that a task may be
suspended until a specific combination of event flags has been set.

• There may also be a “consume” option available, when interrogating event flags, such that all read flags are cleared.

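As an illustration, FreeRTOS event groups provide exactly this kind of facility; a sketch (the bit assignments and non-API function names are illustrative):

    #include "FreeRTOS.h"
    #include "event_groups.h"

    #define EVT_RX_DONE   (1u << 0)    /* illustrative event bits */
    #define EVT_TX_DONE   (1u << 1)

    EventGroupHandle_t events;

    void init_events(void)
    {
        events = xEventGroupCreate();  /* independent kernel object, owned by no task */
    }

    /* Any task may set flags (OR operation). */
    void on_rx_complete(void)
    {
        xEventGroupSetBits(events, EVT_RX_DONE);
    }

    /* Blocking call on a flag combination: suspend until BOTH bits are
     * set, then "consume" them (clear-on-exit clears the read flags).  */
    void wait_for_both(void)
    {
        xEventGroupWaitBits(events,
                            EVT_RX_DONE | EVT_TX_DONE,
                            pdTRUE,            /* clear on exit (consume)    */
                            pdTRUE,            /* wait for all bits, not any */
                            portMAX_DELAY);
    }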
Semaphores:
• A semaphore is a signaling mechanism, and a thread that is waiting on a semaphore can be signaled by another thread.

• This is different from a mutex, as a mutex can be signaled (released) only by the thread that called the wait function.

• A semaphore uses two atomic operations, wait and signal for process synchronization.

• A Semaphore is an integer variable, which can be accessed only through two operations wait() and signal().

There are two types of semaphores:
• Binary Semaphores
• Counting Semaphores.

Binary Semaphores: 
• They can only be either 0 or 1. They are also known as mutex locks, as the locks can provide mutual exclusion.

• All the processes can share the same mutex semaphore, which is initialized to 1. A process then has to wait until the
semaphore's value is 1.

• The process then sets the mutex semaphore to 0 and starts its critical section.

• When it completes its critical section, it resets the value of the mutex semaphore to 1, and some other process can enter
its critical section.
Counting Semaphores:

• They can have any value and are not restricted over a certain domain. They can be used to control access to a resource that
has a limitation on the number of simultaneous accesses.

• The semaphore can be initialized to the number of instances of the resource. Whenever a process wants to use that resource,
it checks if the number of remaining instances is more than zero, i.e., the process has an instance available.

• Then, the process can enter its critical section thereby decreasing the value of the counting semaphore by 1.

• After the process is over with the use of the instance of the resource, it can leave the critical section thereby adding 1 to the
number of available instances of the resource.
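A minimal sketch using POSIX semaphores, where sem_wait() plays the role of wait() and sem_post() the role of signal(); initializing the count to the number of resource instances gives a counting semaphore (a count of 1 would give a binary one):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NUM_INSTANCES 3          /* e.g. 3 identical printers (illustrative) */

    sem_t resource;                  /* counting semaphore */

    void *use_resource(void *arg)
    {
        sem_wait(&resource);         /* wait(): decrement, block if count is 0 */
        printf("task %ld using one instance\n", (long)arg);
        /* ... critical section: use the resource instance ... */
        sem_post(&resource);         /* signal(): increment, wake a waiter     */
        return NULL;
    }

    int main(void)
    {
        sem_init(&resource, 0, NUM_INSTANCES);   /* count = available instances */
        pthread_t t[5];
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, use_resource, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&resource);
        return 0;
    }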

Limitations:
1. One of the biggest limitations of semaphores is priority inversion.
2. Deadlock: suppose a process tries to wake up another process that is not in a sleep state; the signal is lost, and processes may
block indefinitely.
3. The operating system has to keep track of all calls to wait and signal on the semaphore.
Fig. State diagram of a counting semaphore

Fig. 3 State diagram of a binary semaphore


Mutexes

• Mutual exclusion semaphores (mutexes) are independent kernel objects, which behave in a very similar way to normal
binary semaphores.

• They are slightly more complex and incorporate the concept of temporary ownership (of the resource, access to which
is being controlled).

• If a task obtains a mutex, only that same task can release it again – the mutex (and, hence, the resource) is temporarily
owned by the task.

• Mutexes are not provided by all RTOSes, but it is quite straightforward to adapt a regular binary semaphore.

• It would be necessary to write a “mutex obtain” function, which obtains the semaphore and notes the task identifier.

• Then a complementary “mutex release” function would check the calling task’s identifier and release the semaphore
only if it matches the stored value, otherwise it would return an error.
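A sketch of that adaptation (binary_sem_t, sem_obtain(), sem_release() and current_task_id() are assumed kernel-provided primitives, not a real API):

    typedef int task_id_t;

    typedef struct {
        binary_sem_t  sem;      /* underlying binary semaphore (kernel-provided) */
        task_id_t     owner;    /* task currently owning the mutex, or -1        */
    } mutex_t;

    int mutex_obtain(mutex_t *m)
    {
        sem_obtain(&m->sem);              /* may block until the semaphore is free */
        m->owner = current_task_id();     /* note the owning task's identifier     */
        return 0;
    }

    int mutex_release(mutex_t *m)
    {
        if (m->owner != current_task_id())
            return -1;                    /* error: caller is not the owner        */
        m->owner = -1;
        sem_release(&m->sem);
        return 0;
    }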
These are the common operations that an RTOS task can perform with a mutex:
• Create/Delete a mutex
• Get Ownership (acquire a lock on a shared resource)
• Release Ownership (release a lock on a shared resource)

Fig. 1 State diagram of a mutex


Interrupt Service Routine
• For every interrupt, there must be an interrupt service routine (ISR), or interrupt handler.

• When an interrupt occurs, the microcontroller runs the interrupt service routine.

• For every interrupt, there is a fixed location in memory that holds the address of its interrupt service routine (ISR). The table of
memory locations set aside to hold the addresses of ISRs is called the Interrupt Vector Table.
Steps to Execute an Interrupt
When an interrupt gets active, the microcontroller goes through the following steps −
 The microcontroller completes the currently executing instruction and saves the address of the next instruction (PC)
on the stack.

 It also saves the current status of all the interrupts internally (i.e., not on the stack).

 It jumps to the memory location of the interrupt vector table that holds the address of the interrupts service
routine.

 The microcontroller gets the address of the ISR from the interrupt vector table and jumps to it. It starts to execute
the interrupt service subroutine until it reaches the last instruction of the subroutine, which is RETI (return from interrupt).

 Upon executing the RETI instruction, the microcontroller returns to the location where it was interrupted. First, it
gets the program counter (PC) address from the stack by popping the top bytes of the stack into the PC. Then, it
starts to execute from that address.
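As an illustration, in Keil-style C for the 8051, the "interrupt n" attribute ties a routine to vector n of the interrupt vector table, and the compiler emits the final RETI; the reload values below are illustrative:

    #include <reg51.h>

    unsigned int tick_count = 0;

    /* Timer 0 overflow ISR: 'interrupt 1' places this routine's address
     * at the Timer 0 slot of the interrupt vector table. */
    void timer0_isr(void) interrupt 1
    {
        TH0 = 0xFC;          /* reload timer value (illustrative)  */
        TL0 = 0x66;
        tick_count++;        /* work done on every timer interrupt */
    }                        /* compiler generates RETI at the end */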
Level Triggered vs Edge Triggered:

• A level-triggered interrupt module always generates an interrupt whenever the level of the interrupt source is asserted. An
edge-triggered interrupt module generates an interrupt only when it detects an asserting edge of the interrupt source. The edge
gets detected when the interrupt source level actually changes; it can also be detected by periodic sampling, detecting an
asserted level when the previous sample was de-asserted.

• If the interrupt source is still asserted when the firmware interrupt handler handles the interrupt, a level-triggered interrupt
module will regenerate the interrupt, causing the interrupt handler to be invoked again. Edge-triggered interrupt modules can
be acted on immediately, no matter how the interrupt source behaves.

• Level-triggered interrupts are cumbersome for firmware. Edge-triggered interrupts keep the firmware's code complexity low,
reduce the number of conditions for firmware, and provide more flexibility when interrupts are handled.
Enabling and Disabling an Interrupt

• Upon Reset, all the interrupts are disabled even if they are activated. The interrupts must be enabled using software in
order for the microcontroller to respond to those interrupts.

• The IE (interrupt enable) register is responsible for enabling and disabling interrupts. IE is a bit-addressable register.

Interrupt Enable Register (MSB to LSB):
EA | - | ET2 | ES | ET1 | EX1 | ET0 | EX0


•EA − Global enable/disable.
•- − Undefined.
•ET2 − Enable Timer 2 interrupt.
•ES − Enable Serial port interrupt.
•ET1 − Enable Timer 1 interrupt.
•EX1 − Enable External 1 interrupt.
•ET0 − Enable Timer 0 interrupt.
•EX0 − Enable External 0 interrupt.

To enable an interrupt, we take the following steps −


• Bit D7 of the IE register (EA) must be high to allow the rest of the register to take effect.
• If EA = 1, interrupts will be enabled and will be responded to if their corresponding bits in IE are high. If EA = 0, no interrupts will be responded to, even if their
associated bits in the IE register are high.
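For example, in Keil-style 8051 C (reg51.h declares IE and its bit-addressable flags), the Timer 0 and serial port interrupts could be enabled as follows:

    #include <reg51.h>

    void enable_timer0_and_serial(void)
    {
        /* Writing the whole register: EA=1, ES=1, ET0=1 -> 1001 0010 */
        IE = 0x92;

        /* Or, equivalently, using the bit-addressable flags: */
        ET0 = 1;    /* enable Timer 0 interrupt      */
        ES  = 1;    /* enable serial port interrupt  */
        EA  = 1;    /* global enable: must be high for the rest to take effect */
    }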
Priority Inversion

• Priority inversion is an operating system scenario in which a higher-priority process is preempted by a lower-priority process. This
implies the inversion of the priorities of the two processes.

Problems due to Priority Inversion


Some of the problems that occur due to priority inversion are given as follows −

 A system malfunction may occur if a high priority process is not provided the required resources.
 Priority inversion may also lead to implementation of corrective measures. These may include the resetting of
the entire system.
 The performance of the system can be reduced due to priority inversion. This may happen because it is
imperative for higher priority tasks to execute promptly.
 System responsiveness decreases as high priority tasks may have strict time constraints or real time response
guarantees.

Sometimes there is no harm caused by priority inversion as the late execution of the high priority process is not
noticed by the system.
Solutions of Priority Inversion
Some of the solutions to handle priority inversion are given as follows −
 Priority Ceiling
All of the resources are assigned a priority that is equal to the highest priority of any task that may attempt to
claim them. This helps in avoiding priority inversion.

 Disabling Interrupts
There are only two priorities in this case i.e. interrupts disabled and preemptible. So priority inversion is
impossible as there is no third option.

 Priority Inheritance
This solution temporarily elevates the priority of the low priority task that is executing to the highest priority
task that needs the resource. This means that medium priority tasks cannot intervene and lead to priority
inversion.

 No blocking
Priority inversion can be avoided by avoiding blocking as the low priority task blocks the high priority task.

 Random boosting
The priority of the ready tasks can be randomly boosted until they exit the critical section.
Priority Inheritance Protocol (PIP) is a technique used for sharing critical resources among different tasks. It
allows the sharing of critical resources among different tasks without the occurrence of unbounded priority inversion.

Basic Concept of PIP :


• The basic concept of PIP is that when a task goes through priority inversion, the priority of the lower-priority task which
holds the critical resource is increased by the priority inheritance mechanism.
• This allows the task to finish with the critical resource as early as possible without being preempted, and so avoids
unbounded priority inversion.

Working of PIP :
 When several tasks are waiting for the same critical resource, the task which is currently holding this critical resource is
given the highest priority among all the tasks which are waiting for the same critical resource.

• Once the lower-priority task holding the critical resource has been given the highest priority, intermediate-priority tasks cannot
preempt it. This helps in avoiding unbounded priority inversion.

• When the task which was given the highest priority among all tasks finishes the job and releases the critical resource, it gets back its
original priority value.

• If a task is holding multiple critical resources, then after releasing one critical resource it cannot go back to its original priority value. In this
case it inherits the highest priority among all tasks waiting for the same critical resources.
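On POSIX systems, priority inheritance can be requested per mutex; a minimal sketch (using the standard pthread protocol attribute; a real-time scheduling policy such as SCHED_FIFO and suitable privileges are assumed):

    #include <pthread.h>

    pthread_mutex_t resource_lock;

    void init_pip_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* A low-priority task holding this mutex temporarily inherits the
         * priority of the highest-priority task blocked on it. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&resource_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }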
Advantages of PIP :
Priority Inheritance protocol has the following advantages:
 It allows the different priority tasks to share the critical resources.
 The most prominent advantage with Priority Inheritance Protocol is that it avoids the unbounded priority inversion.
Disadvantages of PIP :
Priority Inheritance Protocol has two major problems which may occur:

 Deadlock –
There is a possibility of deadlock in the priority inheritance protocol.

For example, there are two tasks T1 and T2, and suppose T1 has a higher priority than T2. T2 starts running first and holds the critical resource
CR2. After that, T1 arrives and preempts T2. T1 holds the critical resource CR1 and also tries to hold CR2, which is held by T2. Now T1 blocks, and
T2 inherits the priority of T1 according to PIP. T2 starts execution and now tries to hold CR1, which is held by T1. Thus, both T1 and T2 are
deadlocked.

 Chain Blocking –
When a task goes through priority inversion each time it needs a resource, this is called chain blocking.

For example, there are two tasks T1 and T2, and suppose T1 has a higher priority than T2. T2 holds the critical resources CR1 and CR2. T1 arrives
and requests CR1; T2 undergoes priority inversion according to PIP.

Now T1 requests CR2, and again T2 goes through priority inversion according to PIP.

Hence, multiple priority inversions to hold critical resources lead to chain blocking.
Embedded Operating System

• An embedded operating system is a computer operating system designed for use in embedded computer systems.

• These operating systems are designed to be small, resource-efficient and dependable, and they omit many features that
aren't required by specialized applications.

• The hardware that runs an embedded operating system is usually quite resource-constrained.

• Embedded hardware systems are typically quite specific, meaning that these systems are designed to cover
certain tasks with limited resources.

What is Embedded Operating System?

• An embedded operating system is a computer operating system designed for use in embedded computer systems. It has limited features.

• The term "embedded operating system" is often used interchangeably with "real-time operating system". The main goal of designing an embedded
operating system is to perform specified tasks for non-computer devices.

• It allows executing the program code that gives devices the access they need to complete their jobs.

• An embedded operating system is a combination of software and hardware. It produces results that are easily understandable by humans, in many
formats such as images, text, and voice. Embedded operating systems are developed in programming languages close to the hardware, such as
C and C++.
Advantages of Embedded OS
There are various advantages of an embedded operating system. Some of them are as follows:

1. It is small in size and faster to load.


2. It is low cost.
3. It is easy to manage.
4. It provides better stability.
5. It provides higher reliability.
6. It provides some interconnections.
7. It has low power consumption.
8. It helps to increase the product quality.
Disadvantages
There are various disadvantages of an embedded operating system. Some of them are as follows:

1. It isn't easy to maintain.


2. The troubleshooting is harder.
3. It has limited resources for memory.
4. It isn't easy to take a backup of embedded files.
5. You can't change, improve, or upgrade an embedded system once it's been developed.
6. If any problem occurs, you need to reset the setting.
7. Its hardware is limited.
Handheld Operating System

• Handheld operating systems are present in all handheld devices like smartphones and tablets; such a device is also called a
Personal Digital Assistant (PDA). The popular handheld operating systems in today’s market are Android and iOS.

• These operating systems need a high-performance processor and are embedded with different types of sensors.
