
CS3691 EMBEDDED SYSTEMS AND IOT

Part A

1. What is embedded C programming?

* Embedded C is the most popular programming language in the software field for
developing electronic gadgets. Each processor used in an electronic system is
associated with embedded software.

* Embedded C programming plays a key role in making the processor perform a
specific function. In day-to-day life we use many electronic devices such as
mobile phones, washing machines, digital cameras, etc. All these devices work on
microcontrollers that are programmed in embedded C.

2. Define Memory Device Interfacing?

* When executing any instruction, the microprocessor sends out the address of a
memory location or an I/O device. The corresponding memory chip or I/O device is
selected by a decoding circuit.
* Memory requires certain signals to read from and write to its registers, and the
microprocessor transmits corresponding signals for reading or writing data.
* Interfacing is the process of matching the memory's requirements with the
microprocessor's signals. Therefore, the interfacing circuit should be designed so
that the memory's signal requirements match the microprocessor's signals.

3. What is key bouncing and debouncing?

• Bouncing occurs when a key's mechanical contacts make or break, causing momentary
interruptions in the circuit.
• Debouncing is the process of removing these bounces, converting the brutish
realities of the analog world into pristine ones and zeros.

4. Define RTOS?

* An RTOS is a type of operating system used for real-time applications.
* It is used in embedded applications such as telephones, industrial robots and home
appliances.

5. Define Kernel?

• A kernel is a central component of an operating system.


• It acts as an interface between the user applications and the hardware.
• The aim of the kernel is to manage the communication between the software (user-level
applications) and the hardware (CPU, disk, memory, etc.).

Part B

6. Explain in detail about Earliest Deadline First scheduling?

* Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm used
in real-time systems.
* It can be used for both static and dynamic real-time scheduling.
* EDF assigns priorities to tasks according to their absolute deadlines: the task whose
deadline is closest gets the highest priority. The priorities are assigned and changed
dynamically. EDF is very efficient compared with other scheduling algorithms in real-time
systems; it can drive CPU utilization up to about 100% while still guaranteeing the
deadlines of all the tasks.
* In EDF, if the CPU utilization does not exceed 100%, all the tasks meet their deadlines.
EDF finds an optimal feasible schedule; a feasible schedule is one in which all the tasks
in the system execute within their deadlines. If EDF is not able to find a feasible
schedule for all the tasks in a real-time system, then no other scheduling algorithm can
give one. All tasks that are ready for execution should announce their deadlines to EDF
when they become runnable.
* EDF does not require the tasks or processes to be periodic, nor does it require a fixed
CPU burst time. In EDF, an executing task is preempted whenever another instance with an
earlier deadline becomes ready and active, i.e., preemption is allowed in the Earliest
Deadline First scheduling algorithm.

Working Process:
* The first step is to initialize the available tasks. Additionally, along with initialization, we assign
each task a deadline based on completion requirements. The next step is to assign priority to each
task. The system sorts the tasks in order of their deadlines, with the task having the earliest deadline
assigned the highest priority.
* Furthermore, the system selects the task with the earliest deadline for execution. If multiple tasks
have the same deadline, we select the task with the highest priority.
* Finally, the CPU executes the selected task until completed or the deadline is reached. If the task is
completed before its deadline, the system returns to the task prioritization step to select the next task
for execution. If the deadline is reached, the task is considered to be missed.
* Now before the algorithm terminates, we need to check two conditions. First, we need to check
whether all the tasks are executed. Additionally, we check if all deadlines have been missed. If one
of these conditions is met, we need to go back to the task prioritization step and repeat the steps until
the algorithm terminates.

EDF is a dynamic scheduling algorithm where the task priority can change over time as deadlines approach.
Hence, the algorithm continually updates the priority of tasks based on their deadlines and selects the task
with the earliest deadline for execution. Additionally, this helps to ensure that deadlines are met, and tasks
are completed in a timely manner.

Example:

Consider two processes P1 and P2.

Let the period of P1 be p1 = 50

Let the processing time of P1 be t1 = 25

Let the period of P2 be p2 = 75

Let the processing time of P2 be t2 = 30

Steps for solution:

1. The deadline of P1 is earlier, so the priority of P1 > P2.

2. Initially P1 runs and completes its execution of 25 time units.

3. At time 25, P2 starts to execute and runs until time 50, when P1 becomes ready again.

4. Now, comparing the deadlines of (P1, P2) = (100, 75), P2 continues to execute.

5. P2 completes its processing at time 55.

6. P1 then executes until time 75, when P2 becomes ready again.

7. Now, again comparing the deadlines of (P1, P2) = (100, 150), P1 continues to execute.

8. Repeat the above steps.

9. Finally, at time 100, both P1 and P2 have the same deadline (150), so P2 continues to
execute for its remaining processing time, after which P1 starts to execute.

Advantages of the EDF scheduling algorithm:

Meeting Deadlines: EDF ensures that tasks with the earliest deadlines are executed first. By prioritizing
tasks based on their deadlines, EDF minimizes the chances of missing deadlines and helps meet real-time
requirements.
Optimal Utilization: EDF maximizes CPU utilization by always running the ready task with
the nearest deadline whenever the CPU is available. It optimizes the use of system
resources by minimizing idle time.

Responsiveness: EDF provides a high level of responsiveness for time-critical tasks. It ensures that tasks are
scheduled and executed promptly, reducing response times and improving system performance.

Predictability: EDF provides predictability in terms of task execution times and deadlines. The scheduling
decisions are deterministic and can be analyzed and predicted in advance, which is crucial for real-time
systems.

Flexibility: EDF can handle both periodic and aperiodic tasks, making it suitable for a wide range of real-
time systems. It allows for dynamic task creation and scheduling without disrupting the execution of existing
tasks.

Limitations of EDF scheduling algorithm:

• Transient Overload Problem

• Resource Sharing Problem

• Efficient Implementation Problem

7. Explain the context switching mechanism for moving the CPU from one executing process to
another with an example?

* Context switching is a technique used by the operating system to switch a process
from one state to another so that it can carry out its function using the CPU. When
a switch occurs, the system stores the status of the old running process in the form
of registers and assigns the CPU to a new process to execute its tasks. While the
new process runs, the previous process waits in a ready queue. Execution of the old
process later resumes from the point where it was stopped. This mechanism
characterizes a multitasking operating system, in which multiple processes share
the same CPU to perform multiple tasks without the need for additional processors.
The need for Context switching
* Context switching allows a single CPU to be shared across all processes so that each
can complete its execution, while the system stores each task's status. When a process
is reloaded, its execution resumes at the same point where it was interrupted.
* Following are the reasons that describe the need for context switching in the Operating
system.
* A process cannot switch to another process directly in the system. Context switching
helps the operating system switch between the multiple processes that use the CPU's
resources to accomplish their tasks, storing each context so the process's service
can be resumed at the same point later. If we do not store the currently running
process's data or context, that data may be lost while switching between processes.
* If a high-priority process enters the ready queue, the currently running process is
stopped so that the high-priority process can complete its tasks.
* If a running process requires I/O resources, the current process is switched out so
that another process can use the CPU. When the I/O requirement is met, the old process
goes into the ready state to wait for its turn on the CPU. Context switching stores
the state of the process so that it can resume its tasks later; otherwise, the process
would need to restart its execution from the beginning.
* If an interrupt occurs while a process is running, its status is saved as registers
using context switching. After the interrupt is resolved, the process switches from
the wait state to the ready state and later resumes execution from the point where
the operating system interrupted it.
* Context switching allows a single CPU to handle multiple process requests
concurrently without the need for any additional processors.
Example of Context Switching
* Suppose that multiple processes are stored in Process Control Blocks (PCBs). One
process is in the running state, executing its task on the CPU. While it is running,
another process arrives in the ready queue with a higher priority. Context switching
is then used to replace the current process with the new process that requires the
CPU to finish its task. While switching, the context switch saves the status of the
old process in registers; when that process is later reloaded onto the CPU, it resumes
execution from the point at which the new process stopped it. If we did not save the
state of the process, it would have to start execution from the beginning. In this
way, context switching helps the operating system switch between processes, storing
and reloading each process as it needs to execute its tasks.
Context switching triggers
Following are the three types of context switching triggers as follows.
o Interrupts
o Multitasking
o Kernel/User switch

* Interrupts: When the CPU requests data to be read from a disk and an interrupt
occurs, context switching automatically transfers control to the part of the system
that can handle the interrupt in less time.
* Multitasking: Context switching is the characteristic of multitasking that allows a
process to be switched off the CPU so that another process can run. When switching
processes, the old state is saved so that the process's execution can resume at the
same point later.
* Kernel/User Switch: This occurs when the operating system switches between user mode
and kernel mode.

PCB
A PCB (Process Control Block) is a data structure used by the operating system to store
all the information related to a process: for example, when a process is created,
updated, switched, or terminated, that information is recorded in the PCB.
Steps for Context Switching
* Several steps are involved in context switching between processes. The following
diagram represents the context switching of two processes, P1 and P2, when an
interrupt, an I/O request, or a higher-priority process appears in the ready queue.
* As the diagram shows, initially the P1 process is running on the CPU to execute its
task while another process, P2, is in the ready state. If an error or interrupt
occurs, or the process requires input/output, P1 switches from the running state to
the waiting state. Before changing the state of process P1, context switching saves
the context of P1, in the form of registers and the program counter, to PCB1. After
that, it loads the state of the P2 process from PCB2 and moves P2 from the ready
state to the running state.
The following steps are taken when switching from process P1 to process P2:
* First, the context switch saves the state of process P1, in the form of the program
counter and the registers, to its PCB (Process Control Block), since P1 is in the
running state.
* Next, PCB1 is updated and process P1 is moved to the appropriate queue, such as the
ready queue, I/O queue or waiting queue.
* After that, another process is brought into the running state: a new process is
selected from the ready state, typically the one with the highest priority.
* Now, the PCB (Process Control Block) of the selected process P2 is updated. This
includes switching its state from ready to running, or from another state such as
blocked, exit, or suspended.
* If the CPU has executed process P2 before, the system must retrieve the saved status
of P2 so that P2 can resume execution from the point where it was interrupted.
Similarly, process P2 is later switched off the CPU so that process P1 can resume
execution. The P1 process is reloaded from PCB1 into the running state to resume its
task at the same point. Otherwise, the saved information is lost, and when the process
runs again it must start execution from the beginning.

8. Describe the programming embedded system in C?


* Embedded C is the most popular programming language in the software field for
developing electronic gadgets. Each processor used in an electronic system is
associated with embedded software.

* Embedded C programming plays a key role in making the processor perform a specific
function. In day-to-day life we use many electronic devices such as mobile phones,
washing machines, digital cameras, etc. All these devices work on microcontrollers
that are programmed in embedded C.

In embedded system programming, C code is preferred over other languages for the
following reasons:

o Easy to understand
o High Reliability
o Portability
o Scalability

Embedded System Programming:

Basic Declaration

Let's see the block diagram of Embedded C Programming development:

* A function is a collection of statements used to perform a specific task, and a
collection of one or more functions is called a program. Every language consists of
basic elements and grammatical rules. C programs are written using variables, a
character set, data types, keywords, expressions, and so on.

* Embedded C is an extension of the C language. Compared with standard C, embedded C
programming has some additional features such as extra data types, keywords and a
hardware-specific header file, represented by:

#include<microcontroller name.h>

Embedded System:
* An Embedded System is a combination of hardware and software. A desktop computer
also has hardware and software; does that mean a desktop computer is an embedded
system? No. A desktop computer is considered a general purpose system, as it can do
many different tasks simultaneously. Some common tasks are playing videos, working on
office suites, editing images (or videos), browsing the web, etc.

* An Embedded System is more of an application-oriented system, i.e. it is dedicated
to performing a single task (or a limited number of tasks, all working towards a
single main aim).

* An example of an embedded system that we use daily is a wireless router. In order to
get wireless internet connectivity on our mobile phones and laptops, we often use
routers. The task of a wireless router is to take the signal from a cable and transmit
it wirelessly, and to take wireless data from a device (such as a mobile phone) and
send it through the cable.

Programming Embedded Systems:


* As mentioned earlier, Embedded Systems consist of both hardware and software.
If we consider a simple embedded system, the main hardware module is the
processor. The processor is the heart of the embedded system and it can be
anything like a microprocessor, microcontroller, DSP, CPLD (Complex
Programmable Logic Device) or an FPGA (Field Programmable Gate Array).

* All these devices have one thing in common: they are programmable, i.e., we can
write a program (which is the software part of the embedded system) to define how
the device actually works.
Components of Embedded System
An Embedded System consists of four main components. They are the Processor
(Microprocessor or Microcontroller), Memory (RAM and ROM), Peripherals (Input and
Output) and Software (main program).

Processor: The heart of an embedded system is the processor. Based on the functionality
of the system, the processor can be a general purpose processor, a single-purpose
processor, an application-specific processor, a microcontroller or an FPGA.
Memory: Memory is another important part of an embedded system. It is divided into RAM
and ROM. Memory in an embedded system (ROM, to be specific) stores the main program,
while RAM stores the program variables and temporary data.

Peripherals: In order to communicate with the outside world or control the external devices, an
Embedded System must have Input and Output Peripherals. Some of these peripherals include
Input / Output Ports, Communication Interfaces, Timers and Counters, etc.

Software: All the hardware works according to the software (main program) written for it.
The software part of an embedded system includes initialization of the system, controlling
inputs and outputs, error handling, etc.

Real Time Applications of embedded systems

1. Industrial Robots

2. GPS Receivers

3. Digital Cameras

4. Wireless Routers

5. Gaming Consoles

Part C
Explain the Priority Based Scheduling Policies?
* Priority scheduling is a method of scheduling processes based on priority. In this
algorithm, the scheduler selects tasks to run according to their priority.
* Processes with higher priority are carried out first, whereas jobs with equal
priorities are carried out on a round-robin or FCFS basis. Priority depends upon memory
requirements, time requirements, etc.
Types of Priority Scheduling:
Priority scheduling is divided into two main types:
1. Preemptive Scheduling
2. Non-Preemptive Scheduling
i) Preemptive Scheduling
In preemptive scheduling, tasks are assigned priorities. Sometimes it is important to run
a higher-priority task before a lower-priority task, even if the lower-priority task is
still running. In that case the lower-priority task is suspended for some time and resumes
when the higher-priority task finishes its execution.

ii) Non-Preemptive Scheduling
In this type of scheduling method, the CPU is allocated to a specific process. The process
that keeps the CPU busy releases the CPU either by switching context or by terminating. It
is the only method that can be used across various hardware platforms, because it does not
need special hardware (for example, a timer) as preemptive scheduling does.

Scheduling policies:

Earliest Deadline First Scheduling,

• Earliest Deadline First (EDF) is one of the best known algorithms for real-time
processing. It is an optimal dynamic algorithm. In dynamic priority algorithms, the
priority of a task can change during its execution. EDF produces a valid schedule
whenever one exists.
• EDF is a preemptive scheduling algorithm that dispatches the process with the earliest
deadline. If an arriving process has an earlier deadline than the running process, the
system preempts the running process and dispatches the arriving process. A task with a
shorter deadline has a higher priority.
• EDF always executes the job with the earliest deadline, and can schedule task sets that
cannot be scheduled by the rate monotonic algorithm. EDF is optimal among all scheduling
algorithms that do not keep the processor idle unnecessarily; its upper bound on processor
utilization is 100%. Whenever a new task arrives, the ready queue is sorted so that the
task closest to the end of its period is assigned the highest priority.
• The system preempts the running task if it is no longer at the head of the queue after
this sorting. If two tasks have the same absolute deadline, one of the two is chosen at
random (ties can be broken arbitrarily). The priority is dynamic, since it changes across
different jobs of the same task.
• EDF can also be applied to aperiodic task sets. Its optimality guarantees that the
maximum lateness is minimized when EDF is applied. Many real-time systems do not provide
hardware preemption, so other algorithms must be employed. In scheduling theory, a
real-time system comprises a set of real-time tasks, and each task consists of an
infinite or finite stream of jobs. The task set can be scheduled by a number of policies,
including fixed-priority or dynamic-priority algorithms.
• The success of a real-time system depends on whether all the jobs of all the tasks can
be guaranteed to complete their executions before their deadlines. If they can, we say
the task set is schedulable. The schedulability condition is that the total utilization of
the task set must be less than or equal to 1.
Implementation of Earliest Deadline First: is it really not feasible to implement EDF
scheduling?

Task  Arrival  Duration  Deadline
T1    0        10        33
T2    4        3         28
T3    5        10        29

Problems for implementation:

1. Absolute deadlines change for each new task instance, so the priority needs to
be updated every time the task moves back to the ready queue.
2. More importantly, absolute deadlines are always increasing: how can we associate a
finite priority value with an ever-increasing deadline value?
3. Most importantly, absolute deadlines are impossible to compute a priori.

EDF properties:
1. EDF is optimal with respect to feasibility (i.e. schedulability).
2. EDF is optimal with respect to minimizing the maximum lateness.
Advantages
1. It is an optimal algorithm.
2. Periodic, aperiodic and sporadic tasks can all be scheduled using the EDF algorithm.
3. It gives the best CPU utilization.
Disadvantages
1. Needs a priority queue for storing deadlines.
2. Needs dynamic priorities.
3. Typically has no OS support.
4. Behaves badly under overload.
5. Difficult to implement.

Rate Monotonic Scheduling


• Rate Monotonic priority assignment (RM) is a so-called static-priority scheduling
algorithm.
• In this algorithm, priority increases with the rate at which a process must be
scheduled: the process with the lowest period gets the highest priority.
• Priorities are assigned to tasks before execution and do not change over time. RM
scheduling is preemptive, i.e., a task can be preempted by a task with higher priority.
• In RM algorithms, the assigned priority is never modified at runtime. RM assigns
priorities simply in accordance with periods, i.e. the shorter the period (and hence the
higher the activation rate), the higher the priority. RM is therefore a scheduling
algorithm for periodic task sets.
If a lower-priority process is running and a higher-priority process becomes available to
run, it will preempt the lower-priority process. Each periodic task is assigned a priority
inversely based on its period:

1. The shorter the period, the higher the priority.


2. The longer the period, the lower the priority.
The algorithm was proven under the following assumptions:
1. Tasks are periodic.
2. Each task must be completed before the next request occurs.
3. All tasks are independent.
4. Run time of each task request is constant.
5. Any non-periodic task in the system has no required deadlines.

Advantages:
1. Simple to understand.
2. Easy to implement.
3. Stable algorithm.
Disadvantages:
1. Lower CPU utilization.
2. Deals only with independent tasks.
3. Non-precise schedulability analysis.

Comparison between RMS and EDF

Parameter                                               RMS      EDF
Priorities                                              Static   Dynamic
Works with an OS with fixed priorities                  Yes      No
Uses the full computational power of the processor      No       Yes
Possible to exploit the full computational power of
the processor without provisioning for slack            No       Yes

Priority Inversion

• Priority inversion occurs when a low-priority job executes while a ready higher-priority
job waits.
Consider three tasks T1, T2 and T3 with decreasing priorities. Tasks T1 and T3 share some
data or a resource that requires exclusive access, while T2 does not interact with either
of the other two tasks.

Task T3 starts at time t0 and locks semaphore s at time t1. At time t2, T1 arrives and
preempts T3 inside its critical section. After a while, T1 requests the shared resource by
attempting to lock s, but it gets blocked, as T3 is currently using it. Hence, at time t3,
T3 continues to execute inside its critical section. Next, when T2 arrives at time t4, it
preempts T3, as it has a higher priority than T3 and does not interact with either T1 or
T3.
The execution time of T2 increases the blocking time of T1, which is therefore no longer
dependent solely on the length of the critical section executed by T3.
When tasks share resources, there may be priority inversions.

Priority inversion is not avoidable; however, in some cases the priority inversion can
become too large.
Simple solutions:
1. Make critical sections non-preemptable.
2. Execute critical sections at the highest priority of any task that could use them.
Timing anomalies
As seen above, contention for resources can cause timing anomalies due to priority
inversion and deadlock. Unless controlled, these anomalies can be of arbitrary duration
and can seriously disrupt system timing.
These anomalies cannot be eliminated entirely, but several protocols exist to control
them:

1. Priority inheritance protocol
2. Basic priority ceiling protocol
3. Stack-based priority ceiling protocol
Wait-for graph
• A wait-for graph is used to represent the dynamic blocking relationships among jobs. In
the wait-for graph of a system, every job that requires some resource is represented by a
vertex labeled with the name of the job.
• At any time, the wait-for graph contains an (ownership) edge with label x from a
resource vertex to a job vertex if x units of the resource are allocated to that job at
the time.
• The wait-for graph is used to model resource contention. Every serially reusable
resource is modeled. Every job that requires a resource is represented by a vertex with
an edge pointing towards the resource.

Every job holding a resource is represented by an edge pointing away from the resource and
towards the job. A cyclic path in a wait-for graph indicates deadlock.
