Unit 3
1. Real-Time Capability:
RTOS: The primary focus of an RTOS is to provide real-time
responsiveness. It guarantees that critical tasks meet specific
timing requirements and deadlines. RTOSes prioritize tasks
based on their urgency, ensuring that time-critical operations
are executed promptly and predictably.
OS: General-purpose operating systems, also known as desktop
or server operating systems, are not designed for real-time
applications. While they handle multiple tasks and processes
simultaneously, they do not provide real-time guarantees for
task execution, which makes them unsuitable for time-critical
applications.
2. Task Scheduling:
RTOS: In an RTOS, tasks are scheduled based on priority and
time constraints. Preemptive scheduling allows higher-priority
tasks to interrupt lower-priority tasks, ensuring timely execution
of critical tasks.
OS: General-purpose operating systems use preemptive or
non-preemptive scheduling based on time-sharing algorithms
to ensure fairness among tasks without guarantees on timely
execution.
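The fixed-priority, preemptive policy described above can be sketched in C. The task fields and function names below are illustrative assumptions, not any particular RTOS's API:

```c
#include <stddef.h>

/* Hypothetical task descriptor for illustration only. */
typedef struct {
    const char *name;
    int priority;   /* higher number = more urgent */
    int ready;      /* 1 if the task is able to run */
} task_t;

/* Return the index of the highest-priority ready task, or -1 if none.
 * This mirrors the fixed-priority preemptive policy described above. */
int pick_next_task(const task_t *tasks, size_t n) {
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best < 0 || tasks[i].priority > tasks[best].priority)) {
            best = (int)i;
        }
    }
    return best;
}

/* A newly ready task preempts the running one only if strictly more urgent. */
int should_preempt(const task_t *running, const task_t *incoming) {
    return incoming->priority > running->priority;
}
```

Note the design choice that equal-priority tasks do not preempt each other; real kernels typically time-slice between them instead.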
3. Deterministic Timing:
RTOS: RTOSes offer deterministic timing, meaning that the time
taken to execute specific operations is predictable and
consistent. This is crucial for tasks requiring real-time
responsiveness and synchronization.
OS: General-purpose operating systems prioritize fairness and
efficiency over deterministic timing, which may lead to varying
execution times for tasks.
4. Resource Management:
RTOS: RTOSes are designed to efficiently manage limited
resources in embedded systems, such as memory, processor
time, and peripheral access, to meet real-time requirements.
OS: General-purpose operating systems are optimized for
desktop or server environments with ample resources, and their
resource management may not be as stringent as in RTOSes.
5. Complexity:
RTOS: RTOSes are often lightweight and tailored for specific
embedded applications. They have minimal overhead and are
optimized for real-time performance.
OS: General-purpose operating systems are more complex,
feature-rich, and designed to support a wide range of
applications and hardware configurations.
6. Use Cases:
RTOS: RTOSes are used in time-critical applications such as
automotive control systems, medical devices, robotics,
industrial automation, and aerospace systems.
OS: General-purpose operating systems are used in desktop
computers, laptops, servers, smartphones, and other
computing devices where real-time guarantees are not a
primary concern.
Initially, Process 1 is running. Process 1 is switched out and Process 2 is
switched in because of an interrupt or a system call. Context switching
involves saving the state of Process 1 into PCB1 and loading the state of
Process 2 from PCB2. After some time, another context switch occurs:
Process 2 is switched out and Process 1 is switched in again. This involves
saving the state of Process 2 into PCB2 and loading the state of Process 1
from PCB1.
There are three major triggers for context switching. These are given
as follows:
Multitasking: In a multitasking environment, a process is switched out of the
CPU so another process can be run. The state of the old process is saved and
the state of the new process is loaded. On a pre-emptive system, processes may
be switched out by the scheduler.
Interrupt Handling: The hardware switches a part of the context when an interrupt
occurs. This happens automatically. Only some of the context is changed to minimize
the time required to handle the interrupt.
User and Kernel Mode Switching: A context switch may take place when a
transition between the user mode and kernel mode is required in the operating system.
The steps involved in a context switch are as follows:
1. Saving the Current Context: When a context switch is triggered, the operating
system saves the current context of the currently running process or thread.
This includes saving the program counter, registers, stack pointer, and other
relevant information. The context is typically saved in the process control
block (PCB) or thread control block (TCB) associated with the process or
thread.
2. Selecting the Next Process/Thread: The operating system determines which
process or thread should be scheduled next for execution. This decision is
typically based on scheduling algorithms, priorities, and other factors. The
scheduler selects the next process or thread from the ready queue, which
holds the processes or threads that are waiting to be executed.
3. Loading the Context of the Next Process/Thread: The operating system loads
the saved context of the selected process or thread from its PCB or TCB. This
involves restoring the program counter, registers, stack pointer, and other
relevant information. The system prepares the CPU to execute the instructions
of the selected process or thread.
4. Updating Data Structures: As part of the context switch, the operating system
updates various data structures. This includes updating the state of the
previously running process or thread, such as marking it as waiting or ready,
and updating any scheduling-related data structures.
5. Executing the Next Process/Thread: Once the context of the next process or
thread is loaded, the CPU starts executing its instructions. The operating
system transfers control to the newly loaded context, and the process or
thread resumes execution from the point where it was previously interrupted.
6. Repeat the Process: The steps mentioned above are repeated whenever a
context switch is required, allowing the operating system to switch execution
between different processes or threads based on scheduling decisions and
system events.
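The save and load steps above can be sketched as follows. The CPU-state and PCB structures are simplified stand-ins for illustration; a real context switch saves and restores hardware registers in assembly:

```c
#include <string.h>

/* Simplified CPU state and process control block (PCB). */
typedef struct {
    unsigned long pc;       /* program counter */
    unsigned long sp;       /* stack pointer */
    unsigned long regs[8];  /* general-purpose registers */
} cpu_state_t;

typedef struct {
    int pid;
    cpu_state_t context;    /* the saved context lives in the PCB */
} pcb_t;

/* Step 1: save the running process's context into its PCB. */
void save_context(pcb_t *pcb, const cpu_state_t *cpu) {
    memcpy(&pcb->context, cpu, sizeof *cpu);
}

/* Step 3: load the next process's saved context onto the CPU. */
void load_context(const pcb_t *pcb, cpu_state_t *cpu) {
    memcpy(cpu, &pcb->context, sizeof *cpu);
}

/* A full switch: save the outgoing process, load the incoming one. */
void context_switch(pcb_t *out, pcb_t *in, cpu_state_t *cpu) {
    save_context(out, cpu);
    load_context(in, cpu);
}
```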
Running
A task is in the Running state when it is actually executing, i.e. currently
using the processor. On a single-core processor only one task can be in the
Running state at any given time.
Ready
Ready tasks are those that are able to execute (they are not in the Blocked or
Suspended state) but are not currently executing because a different task of
equal or higher priority is already in the Running state.
Blocked
Tasks in the Blocked state are waiting for either a temporal event (such as
a delay expiring) or an external event (such as data arriving on a queue).
They do not use any processing time and cannot be selected to enter the
Running state.
Suspended
Like tasks that are in the Blocked state, tasks in the Suspended state cannot
be selected to enter the Running state, but tasks in the Suspended state do
not have a timeout. Instead, tasks only enter or exit the Suspended state
when explicitly commanded to do so through the vTaskSuspend() and
vTaskResume() API calls respectively.
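The transition rules described above can be captured in a small checker. This is a sketch of the rules, not FreeRTOS code; the enum and function names are invented for illustration:

```c
/* Task states as described above. */
typedef enum { READY, RUNNING, BLOCKED, SUSPENDED } task_state_t;

/* Return 1 if the transition is legal under the rules above:
 * only a Ready task may enter Running; Blocked and Suspended tasks
 * must first move back to Ready (via a timeout or event for Blocked,
 * via an explicit resume call for Suspended). */
int can_transition(task_state_t from, task_state_t to) {
    switch (from) {
    case READY:     return to == RUNNING || to == SUSPENDED;
    case RUNNING:   return to == READY || to == BLOCKED || to == SUSPENDED;
    case BLOCKED:   return to == READY || to == SUSPENDED;
    case SUSPENDED: return to == READY;   /* only via an explicit resume */
    default:        return 0;
    }
}
```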
Task scheduling algorithms:
The process of deciding which task will utilize the CPU time is called task
scheduling. Tasks may be scheduled on the basis of their priorities. The
priority assignment mechanism for the tasks can be either static or dynamic.
In static priority assignment, the priority of a task is assigned as soon as
the task is created and cannot be changed thereafter. In dynamic assignment,
the priorities of tasks can be changed during runtime.
The various scheduling algorithms followed are:
1. First in first out (FIFO): In this algorithm, the task that enters the ready
to run state first is the first to move into the running state. A task that
leaves the running state goes back to the ready to run state. Fig 11.5.a shows
the movement of tasks between the ready to run and running states.
Advantage – it is very easy to implement.
Disadvantage – there is no priority mechanism; in real-time systems each task
has a different priority that must be honored, but using FIFO we cannot
implement priority-based scheduling.
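A minimal FIFO ready queue illustrating the policy; task IDs stand in for tasks, and the queue capacity and names are arbitrary choices for the sketch:

```c
#include <stddef.h>

#define QCAP 8  /* arbitrary capacity for illustration */

typedef struct {
    int ids[QCAP];
    size_t head, count;
} fifo_q_t;

void fifo_init(fifo_q_t *q) { q->head = 0; q->count = 0; }

/* A task enters the ready-to-run state at the back of the queue. */
int fifo_enqueue(fifo_q_t *q, int id) {
    if (q->count == QCAP) return -1;   /* queue full */
    q->ids[(q->head + q->count) % QCAP] = id;
    q->count++;
    return 0;
}

/* The task that came in first goes out first into the running state. */
int fifo_dequeue(fifo_q_t *q) {
    if (q->count == 0) return -1;      /* nothing ready */
    int id = q->ids[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    return id;
}
```

A task leaving the running state is simply enqueued again, which is exactly the movement between the two states described above.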
2. Round robin scheduling: In this case each task gets its turn after all the
other tasks have been given their time slots. It is thus a time-slicing scheme
in which every time slot is of the same length and is given to the tasks one
by one. The CPU time utilization for three tasks according to round robin is
shown in fig 11.5.b. In this example, it is assumed that the time slots are
5 milliseconds each.
A task switch occurs in the following cases:
1. the current task has completed its work before the end of its time slot
2. the current task has no work to be done
3. the current task has used up the time slice allocated to it
● advantage – very easy to implement
● disadvantage – all tasks are treated at the same level, regardless of importance
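The time-slicing behaviour can be simulated in a few lines. Here remaining[] holds each task's outstanding work in milliseconds; this is an illustrative model of the policy, not scheduler code:

```c
#include <stddef.h>

/* Round-robin simulation: each unfinished task runs for at most one
 * time slice (e.g. 5 ms), then the next task gets its turn.
 * Returns the total time until all tasks finish. */
int round_robin_total_time(int *remaining, size_t n, int slice) {
    int elapsed = 0;
    size_t left = n;
    while (left > 0) {
        for (size_t i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;  /* task already finished */
            int run = remaining[i] < slice ? remaining[i] : slice;
            remaining[i] -= run;              /* task uses the CPU */
            elapsed += run;                   /* time advances */
            if (remaining[i] == 0) left--;    /* switch: work complete */
        }
    }
    return elapsed;
}
```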
3. Round robin scheduling with priority:
It is a modified version of the round robin scheduling mechanism.
In this case the tasks are given priorities based on their significance and
deadlines, so a task with a higher priority can preempt the running task and
utilize the CPU time.
If multiple tasks have the same priority, round robin scheduling is used
among them. But whenever a higher-priority task becomes ready, it is executed
first: the CPU suspends the task it was executing and runs the higher-priority
task instead.
For example, a bar-code scanner can use this scheduling algorithm. This method
can be used in soft real-time systems.
Multilevel Queue Scheduling: Processes are divided into multiple queues, and each
queue has a different priority level. Each queue can use its own scheduling algorithm, such as
FCFS, SJN, or RR. Processes move between queues based on their priority or other criteria.
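A sketch of the multilevel-queue dispatch rule, assuming three fixed-priority queues (0 = highest) each served FCFS internally; sizes and names are arbitrary illustrative choices:

```c
#include <stddef.h>

#define LEVELS 3  /* e.g. system, interactive, batch */
#define PER_Q  4

typedef struct {
    int ids[LEVELS][PER_Q];
    size_t count[LEVELS];
} mlq_t;

/* Place a ready task ID in the queue for its priority level. */
int mlq_add(mlq_t *m, int level, int id) {
    if (level < 0 || level >= LEVELS || m->count[level] == PER_Q) return -1;
    m->ids[level][m->count[level]++] = id;
    return 0;
}

/* Dispatch: scan queues from highest to lowest priority and take the
 * front task of the first non-empty queue (FCFS within a queue). */
int mlq_pick(mlq_t *m) {
    for (int lvl = 0; lvl < LEVELS; lvl++) {
        if (m->count[lvl] > 0) {
            int id = m->ids[lvl][0];
            for (size_t i = 1; i < m->count[lvl]; i++)
                m->ids[lvl][i - 1] = m->ids[lvl][i];  /* shift the FIFO */
            m->count[lvl]--;
            return id;
        }
    }
    return -1;  /* nothing ready */
}
```

In a real kernel each level could run its own algorithm (RR for interactive tasks, FCFS for batch), which is the per-queue flexibility described above.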
Rate monotonic analysis (RMA)
It's important to note that RMA assumes that the tasks are independent,
periodic, and have fixed deadlines. It provides a conservative analysis by
assuming worst-case scenarios, and in its basic form it does not account for
factors such as task dependencies, blocking, or resource sharing.
1. Task Creation:
task_create: Creates a new task or thread with specified attributes.
task_spawn: Similar to task_create, it creates a new task and starts its
execution immediately.
pthread_create: Creates a new POSIX thread.
2. Task Termination:
task_exit: Terminates the currently executing task or thread.
task_kill: Terminates a specific task or thread.
pthread_exit: Terminates the calling POSIX thread.
3. Task Scheduling:
task_yield: Relinquishes the CPU to allow other tasks to execute.
task_suspend: Suspends the execution of a task temporarily.
task_resume: Resumes the execution of a previously suspended task.
4. Task Synchronization:
task_mutex_lock: Acquires a mutex lock, allowing exclusive access to a
shared resource.
task_mutex_unlock: Releases a mutex lock, allowing other tasks to acquire it.
task_semaphore_wait: Waits until a semaphore becomes available.
task_semaphore_signal: Signals or releases a semaphore, allowing other tasks
to proceed.
5. Task Communication:
task_queue_send: Sends data or a message to a task's message queue.
task_queue_receive: Receives data or a message from a task's message queue.
task_event_wait: Waits for a specific event or flag to be set by another task.
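The task_* names above are generic illustrations; POSIX threads are the one concrete API in the list. A minimal sketch of creation, work, and termination using pthread_create, pthread_exit, and pthread_join:

```c
#include <pthread.h>

/* The task body: do some work, then terminate the calling thread. */
void *worker(void *arg) {
    int *value = (int *)arg;
    *value *= 2;             /* the task's "work" */
    pthread_exit(value);     /* task termination */
    return (void *)0;        /* not reached */
}

/* Task creation plus waiting for the task to finish. Returns 0 on success. */
int run_worker(int *value) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, value) != 0)  /* task creation */
        return -1;
    return pthread_join(tid, NULL);  /* wait for termination */
}
```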
It's important to note that ISRs are written specifically for the hardware and
software environment of the target system. The implementation and
specific details of ISRs may vary based on the architecture, operating
system, and programming language being used.
1. Scenario:
Three tasks: High-priority (H), Medium-priority (M), and Low-
priority (L).
H requires a resource that is currently held by L.
M has a priority lower than H but higher than L.
2. Priority Inheritance:
Priority inheritance is a technique used to address the priority
inversion problem.
When H needs the resource held by L, L inherits the priority of
H; L's priority is temporarily elevated so that it cannot be
preempted by medium-priority tasks while it holds the resource.
3. Priority Inversion:
Without priority inheritance, the following scenario can lead to
priority inversion:
H starts executing and requires the resource held by L.
L has a lower priority than H, so H is blocked waiting for
the resource to be released.
M, which has a priority higher than L, preempts L and
keeps running.
L cannot continue executing, and therefore cannot release
the resource H is waiting for.
This leads to a situation where a higher-priority task (H) is
blocked by a lower-priority task (L), causing a priority
inversion.
4. Impact:
The priority inversion problem can result in performance
degradation and violation of system requirements.
High-priority tasks may experience unexpected delays,
impacting their timeliness and ability to respond promptly.
In worst-case scenarios, the blocking can become unbounded if
the lower-priority task never gets to release the resource,
effectively hanging the higher-priority task.
5. Solutions:
Priority Inheritance Protocol: The priority inheritance protocol is
a technique used to prevent priority inversion. When a higher-
priority task is blocked by a lower-priority task holding a shared
resource, the lower-priority task temporarily inherits the priority
of the higher-priority task until it releases the resource.
Priority Ceiling Protocol: The priority ceiling protocol is another
solution to the priority inversion problem. It assigns a "ceiling
priority" to each shared resource, and a task that requires that
resource is temporarily elevated to the ceiling priority,
preventing lower-priority tasks from blocking it.
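The priority inheritance protocol can be sketched with a mutex that tracks its owner. The types and function names here are invented for illustration, not a real RTOS API:

```c
#include <stddef.h>

/* Illustrative task with a base priority and a current (effective) one. */
typedef struct {
    int base_priority;
    int effective_priority;
} task_t;

typedef struct {
    task_t *owner;   /* NULL when the mutex is free */
} pi_mutex_t;

/* Acquire (or block on) the mutex. If a higher-priority task blocks,
 * the lower-priority owner inherits its priority, so a medium-priority
 * task can no longer preempt the owner -- no priority inversion. */
void pi_lock(pi_mutex_t *m, task_t *t) {
    if (m->owner == NULL) {
        m->owner = t;
    } else if (t->effective_priority > m->owner->effective_priority) {
        m->owner->effective_priority = t->effective_priority;
    }
}

/* Release the mutex and restore the owner's base priority. */
void pi_unlock(pi_mutex_t *m) {
    if (m->owner) {
        m->owner->effective_priority = m->owner->base_priority;
        m->owner = NULL;
    }
}
```

The priority ceiling protocol differs in that the elevation happens at lock time to a precomputed ceiling, rather than only when a higher-priority task actually blocks.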
1. Code Space: Code space refers to the memory area where the program
instructions are stored. It is the region of memory that holds the executable
code of a program. When a program is compiled or interpreted, the resulting
binary code is loaded into the code space.
The code space typically contains instructions such as machine code or bytecode,
which the computer's processor can directly execute. It includes functions, methods,
classes, and other program structures necessary for program execution.
2. Data Space: Data space, on the other hand, refers to the memory area where
the program's data is stored. It is used to store variables, objects, arrays,
and other data structures that the program manipulates during its execution.
The data space is separate from the code space and is typically dynamically allocated
and deallocated as needed during the program's execution. It includes variables and
data structures that hold input, intermediate results, and output generated by the
program.
In summary, code space is where the program instructions reside, while data space is
where the program's data is stored and manipulated. They represent distinct areas of
memory within a computer system and serve different purposes in program
execution.
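The split can be seen in a short C fragment. The section names (.text, .data, .bss) are the common ELF conventions; exact placement is platform-dependent:

```c
#include <stdlib.h>
#include <string.h>

/* Roughly, where things live in a C program:
 *   - machine code of functions  -> code space (.text)
 *   - initialized globals        -> data space (.data)
 *   - zero-initialized globals   -> data space (.bss)
 *   - malloc'd objects           -> data space (heap)
 *   - local variables            -> data space (stack) */
int initialized_global = 7;       /* .data */
int zeroed_global;                /* .bss  */

int square(int x) {               /* these instructions live in .text */
    int local = x * x;            /* stack */
    return local;
}

int *make_buffer(size_t n) {      /* heap allocation at run time */
    int *buf = malloc(n * sizeof *buf);
    if (buf) memset(buf, 0, n * sizeof *buf);
    return buf;
}
```

On small embedded targets the same split matters physically: code space is typically flash (ROM), while data space is RAM.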
EMBEDDED SYSTEM
An embedded system can be used to perform a single task or more than one task
at the same time; you can pick either one based on your requirements and
application.
To work properly, an embedded system needs a smooth and efficient power
supply. Both a wall adapter and a battery can be used as a power
supply. Some power supplies work as independent equipment while others
are incorporated into the embedded technology they power.
2. Microprocessor
3. Microcontroller
Both are commonly available as 8-, 16-, and 32-bit processors.
RAM/ROM
RAM, which stands for random access memory, and ROM,
which stands for read-only memory, are both present in your
computer.
RAM is volatile memory that temporarily stores the files you are
working on. ROM is non-volatile memory that permanently
stores instructions for your computer.
RAM is volatile memory, which means that the information temporarily
stored in the module is erased when you restart or shut down your computer.
Because the information is stored electrically on transistors, when there is
no electric current, the data disappears. Each time you request a file or
information, it is retrieved either from the computer's storage disk or the
internet. The data is stored in RAM, so each time you switch from one
program or page to another, the information is instantly available. When the
computer is shut down, the memory is cleared until the process begins
again. Volatile memory can be changed, upgraded, or expanded easily by
users.
Timers / Counters
Editor
The editor is the first tool you require for embedded system software
development. The code you write in the C and C++ programming languages is
saved as source files by the editor.
Compiler
The name 'compiler' is mainly used for programs that convert high-level
programming language source code into a low-level programming language.
Debugger
A debugger is a tool used for testing and debugging purposes. It scans the
code thoroughly, identifies the places where errors and bugs occur, and
helps remove them.
UNIT1
A digital camera is a good example of an embedded system. It has a lot of
components embedded in it. Let's look at some of these components in detail:
CCD (Charge-coupled device): Contains an array of light-sensitive photocells
that capture the image.
A2D - Analog-to-digital conversion of images happens here.
D2A - Digital-to-analog conversion of images is done here.
CCD Preprocessor - Commands the CCD to read the image.
JPEG Codec - Compresses and decompresses the image using the JPEG
compression standard.
Pixel Coprocessor - Enables rapid display of an image.
Memory Controller - Controls access to the memory chip found in the camera.
DMA Controller - Enables direct memory access by other devices while the
microcontroller is performing other functions.
UART - Enables communication with the PC's serial port.
ISA Bus Interface - Enables a faster connection with the PC's ISA bus.
LCD Control & Display Control - Controls the display on the camera's LCD.
Multiplier/Accumulator - Performs a particular frequently executed
computation faster than the microcontroller could.
And finally, the microcontroller plays the main role as the heart of the
system.
Application layer
Middleware layer
Firmware layer
The Application layer is mostly written in high-level languages like Java,
C++, or C# with rich GUI support. The application layer calls the middleware
API in response to an action by the user or an event.
The Middleware layer is mostly written in C++ or C, with no rich GUI support.
The middleware software maintains the state machine of the device and is
responsible for handling requests from the upper layer and the lower-level
layer. The middleware exposes a set of API functions which the application
must call in order to use the services offered by the middleware; conversely,
the middleware can send data to the application layer via an IPC mechanism.
The Firmware layer is always written in C. The firmware is responsible for
talking to the chipset, either configuring registers or reading from the
chipset registers. The firmware exposes a set of APIs that the middleware can
call in order to perform specific tasks.
We come across several technically advanced electronic devices in our daily
life. Most of these devices are installed with embedded software systems.
Embedded software is a combination of all the 3 layers mentioned above. It is
created to perform some tasks or to behave in a predefined way. Most firms or
companies maintain a 3-layer embedded software architecture for their
projects.
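The three-layer call flow can be sketched as follows. All names are invented for illustration, and a plain array stands in for memory-mapped chipset registers:

```c
#include <stdint.h>

/* Firmware layer: talks to the chipset registers. A plain array stands
 * in for memory-mapped hardware registers in this sketch. */
uint32_t chip_regs[16];

void fw_write_reg(unsigned idx, uint32_t value) { chip_regs[idx] = value; }
uint32_t fw_read_reg(unsigned idx) { return chip_regs[idx]; }

/* Middleware layer: maintains the device's state machine and exposes an
 * API the application must call; the app never touches registers. */
typedef enum { DEV_OFF, DEV_ON } dev_state_t;
dev_state_t dev_state = DEV_OFF;

int mw_power_on(void) {
    fw_write_reg(0, 0x1);   /* configure the chipset via the firmware */
    dev_state = DEV_ON;     /* update the middleware state machine */
    return 0;
}

dev_state_t mw_get_state(void) { return dev_state; }

/* Application layer: responds to a user action by calling the
 * middleware API, knowing nothing about the registers underneath. */
int app_handle_power_button(void) {
    return mw_power_on();
}
```

Each layer only calls downward through the API of the layer below it, which is the layering discipline described above.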
ICD (In-Circuit Debugger) vs IDE (Integrated Development Environment)
ICD:
1. Purpose:
ICD is a hardware tool used for debugging and testing
embedded software directly on the target hardware. It allows
developers to interact with the microcontroller or processor on
the actual embedded system.
It provides real-time debugging capabilities, allowing
developers to pause the processor, inspect memory and
registers, set breakpoints, and step through code execution on
the physical target.
2. Functionality:
ICD focuses on low-level debugging operations and provides
direct access to the hardware resources of the embedded
system.
It can be used to analyze the behavior of the code in real-time,
helping developers identify bugs, logic errors, and performance
issues specific to the target environment.
3. Usage:
ICD is primarily used during the later stages of embedded
software development when the code is being executed on the
actual hardware.
It is especially valuable for debugging complex issues that may
only manifest in the specific hardware configuration of the
target system.
IDE:
1. Purpose:
IDE is a software application that provides a centralized and
user-friendly environment for writing, editing, compiling, and
debugging code.
It serves as a complete software development platform,
offering a range of tools and features to streamline the
development workflow.
2. Functionality:
IDE offers a source code editor with features like syntax
highlighting, code completion, and code navigation, making it
easier for developers to write and manage code.
It includes a compiler or build system that translates the high-
level code into machine-readable instructions for the target
platform.
IDEs often integrate a debugger that allows developers to set
breakpoints, inspect variables, and step through code execution
while running in a simulated or emulated environment.
3. Usage:
IDE is used throughout the entire embedded software
development process, from writing and testing code to
compiling and deploying it onto the target hardware.
It is especially valuable during the early stages of development
when code is written and tested in a simulated or emulated
environment before being deployed to the physical target.