
The main reason for the development of the microcontroller was to overcome the chief drawback of the microprocessor. Even though microprocessors are powerful devices, they require external chips such as RAM, ROM, input/output ports, and other components in order to build a complete working system. This made it economically difficult to develop computerized consumer appliances on a large scale, as the system cost was very high. Microcontrollers are devices that actually fit the profile of a "computer on a chip", as each one contains a main processing unit (processor) along with the other components needed to make it a complete computer. The components present on a typical microcontroller IC are the CPU, memory, input/output ports, and timers.

A microprocessor, popularly known as a "computer on a chip" in its early days, is a general-purpose central processing unit (CPU) fabricated on a single integrated circuit (IC) (the microcontroller is now considered a more accurate form of a complete computer). It is a small but very powerful electronic brain that operates at a blistering speed and carries out the instructions of a computer program in order to perform arithmetic and logical operations, data storage, system control, input/output operations, and so on. The key term in the definition of a microprocessor is "general purpose". It means that, with the help of a microprocessor, one can build either a simple system or a large and complex machine around it with a few extra components, as the application demands. The main task of a microprocessor is to accept data from input devices, process this data according to the instructions, and provide the results of these instructions as output through output devices. A microprocessor is an example of a sequential logic device, as it has internal memory in the form of registers that hold data and instructions while they are processed.

The important components in a microprocessor include the Arithmetic and Logic Unit (ALU), the Control Unit, and the Registers.

A microcomputer can be defined as a small, inexpensive computer of limited capability. It has the same architectural block structure found in any computer. Present-day microcomputers are smaller still; nowadays they are the size of a notebook. A microcomputer, then, is a small, relatively inexpensive computer with a microprocessor as its central processing unit (CPU). A microcomputer consists of a microprocessor, memory, input devices, and output devices.

Embedded systems are computer systems designed to perform specific functions
within a larger system or device. These systems often have limited resources, including
processing power, memory, and energy. Multitasking in embedded systems allows
efficient utilization of these resources by enabling concurrent execution of multiple
tasks or processes.

Multitasking in embedded systems

Multitasking in embedded systems refers to the ability of an embedded system to execute multiple tasks concurrently or in a seemingly simultaneous manner. It allows the system to handle multiple processes, threads, or tasks, which may have different priorities, deadlines, and resource requirements. Multitasking enables efficient utilization of system resources and can improve overall system responsiveness.

There are generally two approaches to implementing multitasking in embedded systems: cooperative multitasking and preemptive multitasking.

Cooperative Multitasking:

1. Cooperative multitasking relies on the cooperation of individual tasks to allocate CPU time among themselves. Each task is responsible for voluntarily yielding control to other tasks at specific points in its execution. This can be done by calling a scheduling function or by following a predefined cooperative multitasking framework. When a task yields, it allows another task to execute. A minimal code sketch of this pattern is given after the lists below.

Advantages of cooperative multitasking:

● Simplicity: Cooperative multitasking is relatively simple to implement, as it doesn't require complex scheduling algorithms or hardware support.
● Deterministic behavior: Since tasks explicitly yield control, their execution order and timing can be precisely controlled.

Disadvantages of cooperative multitasking:

● Lack of fairness: If a task does not yield appropriately, it can monopolize the CPU and
starve other tasks, causing the system to become unresponsive.
● Responsiveness: If a task fails to yield promptly, the system may experience delays and
reduced responsiveness.
● Error-prone: Cooperative multitasking relies on the cooperation of tasks, making it
susceptible to bugs and programming errors that can affect the entire system.

Cooperative multitasking is commonly used in small-scale embedded systems with simple requirements, where task cooperation can be ensured and strict timing guarantees are not necessary.
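
The following is a minimal sketch of the cooperative pattern described above, using a plain C round-robin loop; the task names and the yield-by-returning structure are illustrative assumptions, not taken from any particular framework.

```c
/* Minimal cooperative multitasking sketch (illustrative only).
 * Each "task" is a function that does a small amount of work and then
 * returns, which is its way of voluntarily yielding the CPU.
 * The scheduler is just a loop that calls each task in turn. */
#include <stdio.h>

typedef void (*task_fn)(void);

static void task_blink(void)  { /* e.g. toggle an LED     */ puts("blink");  }
static void task_sensor(void) { /* e.g. sample a sensor   */ puts("sensor"); }
static void task_logger(void) { /* e.g. write a log entry */ puts("log");    }

int main(void)
{
    task_fn tasks[] = { task_blink, task_sensor, task_logger };
    const int n = sizeof tasks / sizeof tasks[0];

    for (;;) {                       /* simple round-robin scheduler        */
        for (int i = 0; i < n; i++)  /* each call runs to completion, i.e.  */
            tasks[i]();              /* the task yields by returning here   */
    }
    return 0;
}
```

Note that if any one of these task functions blocks or loops forever, the others never run again, which is exactly the starvation and responsiveness problem listed under the disadvantages above.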

2. Preemptive Multitasking: Preemptive multitasking involves the use of an operating system or real-time kernel to manage task scheduling and resource allocation. The operating system divides CPU time among multiple tasks based on their priorities and scheduling policies. When a higher-priority task becomes eligible for execution, the operating system interrupts the currently running lower-priority task and switches to the higher-priority task. A sketch using a real-time kernel is given after the lists below.

Advantages of preemptive multitasking:

● Priority-based scheduling: Preemptive multitasking allows tasks to be assigned priorities, ensuring that critical tasks are executed in a timely manner.
● Fairness: The operating system ensures fair CPU time allocation among tasks,
preventing a misbehaving task from monopolizing resources.
● Responsiveness: Preemptive multitasking provides better responsiveness as tasks can
be quickly interrupted and switched, allowing for timely handling of critical events.

Disadvantages of preemptive multitasking:

● Complexity: Preemptive multitasking requires an operating system or real-time kernel, which adds complexity to the system.
● Overhead: Context switching between tasks incurs some overhead, including saving and
restoring task contexts, which can impact performance in resource-constrained
systems.

Preemptive multitasking is commonly used in complex embedded systems where real-time requirements, task prioritization, and fairness are critical. It requires hardware support, such as a timer interrupt, to trigger task switches.
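
As an illustration of the preemptive approach, here is a hedged sketch using the FreeRTOS API as one example of a real-time kernel; the task names, stack sizes, and priority values are assumptions chosen only for the example.

```c
/* Preemptive multitasking sketch using the FreeRTOS API (illustrative only).
 * The kernel's timer (tick) interrupt allows it to preempt the low-priority
 * task whenever the high-priority task becomes ready to run. */
#include "FreeRTOS.h"
#include "task.h"

static void high_priority_task(void *params)
{
    (void)params;
    for (;;) {
        /* Time-critical work, e.g. sampling a sensor, goes here. */
        vTaskDelay(pdMS_TO_TICKS(10));   /* block until the next period */
    }
}

static void low_priority_task(void *params)
{
    (void)params;
    for (;;) {
        /* Background work; preempted whenever the task above becomes ready. */
    }
}

int main(void)
{
    /* Higher number means higher priority in FreeRTOS. */
    xTaskCreate(high_priority_task, "hi", configMINIMAL_STACK_SIZE, NULL, 3, NULL);
    xTaskCreate(low_priority_task,  "lo", configMINIMAL_STACK_SIZE, NULL, 1, NULL);

    vTaskStartScheduler();               /* hand control to the kernel */
    for (;;) { }                         /* reached only if the scheduler fails to start */
}
```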

To implement multitasking in embedded systems, an operating system or real-time kernel is employed. These software components provide services like task creation, deletion, suspension, resumption, and synchronization mechanisms such as semaphores, mutexes, and message queues.
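
To illustrate the kind of synchronization services mentioned above, the following sketch (again using the FreeRTOS API as an example) passes data from one task to another through a message queue; the queue length, priorities, and task names are illustrative assumptions.

```c
/* Message-queue synchronization sketch (FreeRTOS API, illustrative only). */
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t sample_queue;

static void producer_task(void *params)
{
    (void)params;
    int sample = 0;
    for (;;) {
        sample++;                                   /* pretend measurement */
        xQueueSend(sample_queue, &sample, portMAX_DELAY);
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

static void consumer_task(void *params)
{
    (void)params;
    int sample;
    for (;;) {
        if (xQueueReceive(sample_queue, &sample, portMAX_DELAY) == pdTRUE) {
            /* process the received sample here */
        }
    }
}

int main(void)
{
    sample_queue = xQueueCreate(8, sizeof(int));    /* at most 8 pending samples */
    xTaskCreate(producer_task, "prod", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(consumer_task, "cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    vTaskStartScheduler();
    for (;;) { }
}
```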


Real-time systems can be classified as hard real-time systems, in which the consequences of missing a deadline can be catastrophic, and soft real-time systems, in which the consequences are relatively tolerable. In hard real-time systems it is important that tasks complete within their deadlines even in the presence of a failure. Examples of hard real-time systems are control systems in space stations, autopilot systems, and monitoring systems for patients in critical condition. In soft real-time systems it is more important to detect a fault economically and as soon as possible than to mask it. Examples of soft real-time systems are airline reservation, banking, and e-commerce applications.

Real-time distributed systems are computing systems that consist of multiple interconnected nodes or processors, distributed across a network, and designed to execute real-time tasks. These systems are characterized by their ability to meet strict timing constraints and deliver timely responses in a distributed environment.
Key characteristics of real-time distributed systems include:
1. Distribution: Real-time distributed systems are composed of multiple nodes or
processors connected through a network. These nodes can be geographically dispersed
and communicate with each other to exchange data and coordinate their activities.
2. Real-time Constraints: Real-time distributed systems operate under strict timing
constraints. They are designed to respond to events within predefined deadlines.
Meeting these timing requirements is crucial, as failures to respond in a timely manner
can result in system instability, decreased performance, or even catastrophic
consequences in safety-critical applications.
3. Fault Tolerance: Real-time distributed systems often incorporate fault tolerance
techniques to ensure system reliability and availability. Fault tolerance mechanisms,
such as redundancy, error detection, and recovery mechanisms, are employed to handle
failures that can occur in the distributed environment. These mechanisms aim to detect
faults, isolate them, and recover from failures while meeting the real-time constraints.
4. Scalability: Real-time distributed systems need to scale to handle varying workloads and
accommodate a growing number of nodes or processors. Scalability is achieved through
the use of distributed algorithms, load balancing techniques, and resource allocation
strategies that allow the system to efficiently distribute and manage tasks across the
distributed nodes.
5. Communication and Data Transfer: Communication in real-time distributed systems is a
critical aspect. Nodes exchange data and messages to coordinate their actions and
share information. Real-time communication protocols and mechanisms are employed to
ensure timely and reliable data transfer, minimizing communication delays and
guaranteeing data consistency and integrity.

2. Techniques for Fault Tolerance

Fault tolerance is the ability of a system to continue operating despite the failure of a limited subset of its hardware or software. The goal of the system designer is therefore to ensure that the probability of system failure is acceptably small. Either hardware or software faults can prevent a real-time system from meeting its deadlines.

2.1 Fault Types

There are three types of faults: permanent, intermittent, and transient.
A permanent fault does not die away with time, but remains until it is repaired or the affected unit is replaced.
An intermittent fault cycles between the fault-active and fault-benign states.
A transient fault dies away after some time.

2.2 Fault Detection

Fault detection can be done either online or offline. Online detection goes on in parallel with normal system operation. Offline detection consists of running diagnostic tests.

2.2.1 Error Detection Techniques

In order to achieve fault tolerance, the first requirement is that transient faults have to be detected. Several error-detection techniques exist against transient faults: watchdogs, duplication, and a few others.
Watchdogs. In the case of watchdogs[2], program flow or transmitted data is periodically checked for the presence of errors. The simplest watchdog scheme, the watchdog timer, monitors the execution time of processes to see whether it exceeds a certain limit.
A watchdog timer requires periodic "heartbeats" from the monitored task, indicating that
it is functioning correctly. If the watchdog timer does not receive the expected heartbeat
within a specified time window, it assumes that the task has failed or stalled and
initiates appropriate recovery actions, such as restarting the task or resetting the
system.
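
A rough sketch of this heartbeat pattern is shown below; wdt_init(), wdt_kick(), and task_is_healthy() are hypothetical stand-ins for whatever the real hardware driver and application self-check would provide.

```c
/* Watchdog "heartbeat" sketch (illustrative only).  wdt_init(), wdt_kick()
 * and task_is_healthy() are stubs standing in for real hardware/driver calls. */
#include <stdbool.h>
#include <stdio.h>

static void wdt_init(unsigned timeout_ms) { printf("watchdog armed: %u ms\n", timeout_ms); }
static void wdt_kick(void)                { /* reload the hardware countdown      */ }
static bool task_is_healthy(void)         { return true; /* application self-check */ }
static void do_work(void)                 { /* the monitored task's real work     */ }

int main(void)
{
    wdt_init(500);                /* reset the MCU if no kick arrives within 500 ms */

    for (int i = 0; i < 100; i++) {       /* main loop of the monitored task */
        do_work();
        if (task_is_healthy())
            wdt_kick();           /* heartbeat: sent only while the task makes progress */
        /* If the task stalls or the self-check fails, the kick is skipped,
           the watchdog expires, and the hardware forces recovery (a reset). */
    }
    return 0;
}
```
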
Duplication. Duplication is an approach in which multiple processors are expected to produce the same result, and the results are compared. A discrepancy indicates the existence of a fault.

2.2.2 Redundancy

Redundancy involves duplicating critical components or resources to provide backup or alternative options in case of failures. There are different types of redundancy:
● Hardware Redundancy: This involves duplicating hardware components such as
processors, memory modules, or input/output devices. These redundant
components work in parallel, and if one fails, the others take over the workload
seamlessly.
● Software Redundancy: In software redundancy, the critical tasks or algorithms
are implemented in multiple redundant software modules. These modules
execute the same functionality, and their results are compared to detect
discrepancies or errors. Voting mechanisms can be employed to determine the
correct result among the redundant modules.
● Time Redundancy: Time redundancy involves executing critical tasks multiple times and comparing their results. The task is considered successful if a consistent result is obtained across the redundant executions. This technique is particularly useful for guarding against transient faults. A small code sketch of this pattern appears after this list.
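
As a concrete illustration of the time-redundancy item above, the following sketch runs a computation twice and accepts the result only when the two runs agree; critical_computation() and the retry limit are illustrative assumptions.

```c
/* Time-redundancy sketch (illustrative only): run the same computation twice
 * and accept the result only if both runs agree, which screens out transient
 * faults that corrupt a single execution. */
#include <stdio.h>

static int critical_computation(int input)
{
    return input * input;        /* placeholder for the real critical task */
}

/* Returns 0 on success (consistent result stored in *out), -1 otherwise. */
static int run_with_time_redundancy(int input, int *out, int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        int first  = critical_computation(input);
        int second = critical_computation(input);
        if (first == second) {   /* agreement => accept the result */
            *out = first;
            return 0;
        }
        /* Disagreement suggests a transient fault: try again. */
    }
    return -1;
}

int main(void)
{
    int result;
    if (run_with_time_redundancy(7, &result, 3) == 0)
        printf("accepted result: %d\n", result);
    else
        printf("no consistent result obtained\n");
    return 0;
}
```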

A typical example of where the above techniques are applied is the autopilot system on board a large passenger aircraft[4].
A passenger aircraft typically has a central autopilot system with two backups. This is an example of making a system fault tolerant by adding redundant hardware. The two extra systems are not used unless the main system is completely broken.
However, this is not sufficient, since if the main system starts behaving erratically the lives of many people are in danger. The system is therefore also made resistant to faults using software.
Generally, every process of the autopilot runs in more than two copies, distributed across different computers. The system then votes on the results of these processes. To make the system even more secure, some autopilots also employ the principle of design diversity: not only is the software run multiple times, but each copy is written by a different engineering team. The likelihood of the same mistake being made by different engineering teams is very low.

However, such measures are only applied to highly critical systems. In general, hardware redundancy is avoided as far as possible, due to the limited resources available. The weight of the system, power consumption, and price constraints make it difficult to employ heavy hardware redundancy to make the system fault tolerant. Software redundancy is therefore more commonly used to increase the fault tolerance of systems.

There are a few factors that affect the diversity of the multiple versions.
The first factor is the requirements specification. A mistake in the specification can cause a wrong output to be delivered.
A second factor is the programming language. The nature of the language greatly affects the programming style.
A third factor is the numerical algorithms that are used. Algorithms implemented to a finite precision can behave quite differently for certain sets of inputs than do theoretical algorithms, which assume infinite precision.
A fourth factor is the nature of the tools that are being used; if the different versions are developed with the same tools, the probability of common-mode failure might increase.
A fifth factor is the training and quality of the programmers and the management structure. The major difficulty is that software development remains labor-intensive.

2.3 Fault Tolerance Techniques


1) TMR (Triple Modular Redundancy): Multiple copies (typically three) are executed, and error checking is achieved by comparing the results after completion, usually by majority voting. In this scheme, the overhead is always on the order of the number of copies running simultaneously. A minimal voting sketch appears after this list.

2) PB (Primary/Backup): The tasks are assumed to be periodic, and two instances of each task (a primary and a backup) are scheduled on a uniprocessor system. One of the restrictions of this approach is that the period of any task should be a multiple of the period of its preceding tasks. It also assumes that the execution time of the backup is shorter than that of the primary.

3) PE (Primary/Exception): This is the same as the PB method except that exception handlers are executed instead of backup programs.
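
The voting sketch referred to under TMR above could look roughly like the following; the three result values are placeholders for the outputs of the three redundant copies.

```c
/* TMR majority-voter sketch (illustrative only): three redundant copies each
 * produce a result, and the voter returns the value that at least two agree on. */
#include <stdio.h>

/* 2-out-of-3 vote; returns 0 and stores the majority value in *out,
 * or returns -1 if all three results differ. */
static int tmr_vote(int a, int b, int c, int *out)
{
    if (a == b || a == c) { *out = a; return 0; }
    if (b == c)           { *out = b; return 0; }
    return -1;                       /* no majority: unrecoverable here */
}

int main(void)
{
    /* Results from three redundant modules; one of them is faulty. */
    int r1 = 42, r2 = 41, r3 = 42;
    int voted;

    if (tmr_vote(r1, r2, r3, &voted) == 0)
        printf("voted result: %d\n", voted);
    else
        printf("vote failed: all copies disagree\n");
    return 0;
}
```
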
Primary Backup Fault Tolerance
This is the traditional fault-tolerant approach, in which both time and space exclusions are used. The main ideas behind this algorithm are that (a) the backup of a task need not execute if its primary executes successfully, and (b) the time exclusion in this algorithm ensures that no resource conflicts occur between the two versions of any task, which might improve schedulability.
Disadvantages of this scheme are that (a) there is no de-allocation of the backup copy, (b) the algorithm assumes that the tasks are periodic (the times of the tasks are predetermined) and compatible (the period of one process is an integral multiple of the period of the other), and (c) it assumes that the execution time of the backup is shorter than that of the primary process.
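
Finally, here is a very simplified sketch of the primary/backup idea, assuming a hypothetical acceptance_test() to decide whether the primary succeeded; real PB scheduling also reserves a time slot for the backup in the schedule, which this sketch does not show.

```c
/* Primary/backup sketch (illustrative only): the backup version executes only
 * when the primary's output fails an acceptance test, mirroring the idea that
 * the backup need not run if the primary succeeds. */
#include <stdbool.h>
#include <stdio.h>

static int  primary_version(int input)  { return input + 1;  /* main algorithm   */ }
static int  backup_version(int input)   { return input + 1;  /* simpler fallback */ }
static bool acceptance_test(int result) { return result >= 0; /* sanity check    */ }

static int run_task(int input)
{
    int result = primary_version(input);
    if (acceptance_test(result))
        return result;               /* primary succeeded: backup is not run */

    /* Primary failed its acceptance test: fall back to the backup copy. */
    return backup_version(input);
}

int main(void)
{
    printf("task output: %d\n", run_task(5));
    return 0;
}
```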
