Prev-2022
1.What is an embedded system? Comment on the embedded system characteristics.
Ans- An embedded system is an electronic or electromechanical system designed to perform a
specific function; it is a combination of both hardware and software.
An ES is a combination of 3 things:
(i) Hardware
(ii) Software
(iii) Mechanical components
It is supposed to do one or set of specific tasks only.
Example- Washing machine
A washing machine from an ES point of view has:
Hardware- Buttons, a display, a buzzer, and the electronic circuitry.
Software- A chip on the circuit board holds the software, which drives, controls, and monitors
the various operations possible.
Mechanical components- The internals of the washing machine which actually wash the clothes
and control the input and output of water, plus the chassis itself.
2. What is the role of microprocessor or microcontroller in embedded system design? List down
other technologies usable for embedded system.
Ans- The microprocessor or microcontroller plays a central role in embedded system design as the
"brain" of the system. It is the hardware component responsible for executing the software
instructions that define the system's behavior. Besides general-purpose microprocessors and
microcontrollers, other technologies usable for embedded systems include:
• Digital signal processors (DSPs): DSPs are specialized processors that are
designed for processing digital signals. They are often used in applications that
require real-time processing of audio or video signals.
• Field-programmable gate arrays (FPGAs): FPGAs are programmable chips that can
be customized to meet the specific requirements of an embedded system. They are
often used in applications that require a high degree of flexibility or performance.
• Application-specific integrated circuits (ASICs): ASICs are custom-designed chips
that are optimized for a specific application. They are often used in applications
where performance or cost is critical.
• Reconfigurable computing platforms: Reconfigurable computing platforms are
systems that allow the hardware to be reconfigured to meet the changing needs of
an application. They are often used in applications where the requirements are not
well-defined or where the system needs to be upgraded frequently.
1. Requirements gathering: This step involves gathering the requirements for the
embedded system from the customer or other stakeholders. The requirements can
be functional or non-functional. Functional requirements specify what the system
should do, while non-functional requirements specify how the system should
perform, such as its performance, power consumption, and cost.
2. Specification formulation: This step involves formulating a specification for the
embedded system based on the requirements gathered in the previous step. The
specification should be clear, concise, and unambiguous.
3. Architecture design: This step involves designing the architecture of the embedded
system. The architecture defines the high-level components of the system and their
interactions.
The ES design process is iterative, meaning that the steps may be repeated as needed.
The specific steps and their order may vary depending on the complexity of the embedded
system and the design methodology used.
Ans- In the context of embedded system design, requirements are the descriptions of what
the system should do and how it should perform. They are gathered from the customer or
other stakeholders and are used to guide the design process.
• Functional requirements: These requirements specify what the system should do.
For example, a functional requirement for a car's anti-lock braking system (ABS)
might be that it should prevent the wheels from locking up during braking.
• Non-functional requirements: These requirements specify how the system should
perform. For example, a non-functional requirement for the ABS might be that it
should have a response time of less than 20 milliseconds.
5.Take an example application ‘GPS Moving Map’. Create a requirement form which
well describes the requirements of this application, for its design.
System Requirements
• Functional Requirements:
o The system shall display a map of the current location of the user.
o The system shall allow the user to zoom in and out of the map.
o The system shall allow the user to pan the map.
o The system shall allow the user to select a destination on the map.
o The system shall calculate the route from the current location of the user to
the selected destination.
o The system shall display the route on the map.
o The system shall provide turn-by-turn directions to the destination.
• Non-Functional Requirements:
o Performance: The system shall display the map and provide turn-by-turn
directions in real time.
o Safety: The system shall not display any false or misleading information.
o Security: The system shall protect against unauthorized access to the map
data.
o Usability: The system shall be easy to use by a wide range of users.
o Maintainability: The system shall be easy to maintain and update.
Additional Requirements:
This is just a sample requirement form, and the specific requirements for a GPS moving
map application will vary depending on the specific needs of the user.
Ans- The language used for describing specifications in embedded systems can vary
depending on the specific needs of the project. However, some common languages used
for describing specifications in embedded systems include:
• Natural language: Natural language is the most common language used for
describing specifications in embedded systems. It is easy to understand and use,
but it can be ambiguous and difficult to interpret.
• UML: UML is a modeling language that can be used to describe the behavior,
structure, and interactions of an embedded system. It is a popular choice for
describing specifications in embedded systems because it is well-understood and
supported by a variety of tools.
Top-down design vs. bottom-up design:
• Top-down design starts with a high-level overview of the system and then breaks it down
into smaller and smaller parts; bottom-up design starts with the individual components of
the system and then builds them up into a larger system.
• Top-down is easier to understand and manage for large systems; bottom-up is more
difficult to understand and manage for large systems.
• Top-down is better for complex systems with a lot of dependencies; bottom-up is better
for simple systems with few dependencies.
• Top-down is more efficient in terms of time and resources for large systems; bottom-up is
less efficient for large systems.
• Top-down is more prone to errors for large systems; bottom-up is less prone to errors for
large systems.
• Top-down is more suitable for systems with strict requirements; bottom-up is less suitable.
• Top-down is more suitable for systems with a well-defined architecture; bottom-up is less
suitable.
• Top-down is more suitable for systems with a clear understanding of the requirements;
bottom-up is less suitable.
• Top-down is more suitable for systems with a team of experienced engineers; bottom-up is
less suitable.
RISC vs. CISC:
• RISC: Instructions are fixed length (typically 4 bytes; 2 bytes in compressed modes such
as Thumb).
• CISC: Instructions are variable length (e.g., 1 to 15 bytes on x86).
Ans- The ARM (Advanced RISC Machines) programming model is a foundational framework that guides
software development for ARM-based processors, which are widely used in various embedded systems,
mobile devices, and other applications. The ARM architecture is characterized by its RISC (Reduced
Instruction Set Computer) design philosophy, which emphasizes simplicity, efficiency, and streamlined
instruction execution. The ARM programming model provides a structured approach to writing software
that harnesses the capabilities of ARM processors.
Here are some of the key features of the ARM programming model:
• Reduced instruction set: The ARM ISA has a small number of simple instructions.
This makes the processor faster and easier to design and implement.
• Register-based architecture: The ARM processor has a large number of registers.
This allows for efficient data access and manipulation.
• Memory-mapped I/O: The ARM processor uses memory-mapped I/O. This means
that I/O devices are accessed through memory addresses, just like regular data.
This makes it easier to program I/O devices.
• Interrupts: The ARM processor supports interrupts. This allows the processor to
respond to events that occur outside of the main program.
• Thumb mode: The ARM processor has a Thumb mode that allows for smaller and
more efficient code.
The ARM instructions can also be categorized by their length. The majority of ARM
instructions are 32 bits long, but there are also a few 16-bit instructions. The 16-bit
instructions are used in Thumb mode, which is a special mode that allows for smaller and
more efficient code.
10.What are the conditional codes in ARM? Explain with few examples. Take an example C code
with conditional statement and implement the same in ARM assembly language?
Ans- Conditional codes in ARM are used to control the flow of execution of the program.
They are used in conjunction with the branch instructions to cause the program to branch
to a different location depending on the outcome of a condition.
• EQ: Equal (Z flag set)
• NE: Not equal (Z flag clear)
• CS/HS: Carry set (unsigned higher or same)
• CC/LO: Carry clear (unsigned lower)
• MI: Minus/negative (N flag set)
• PL: Plus/positive or zero (N flag clear)
• VS: Overflow set
• VC: Overflow clear
• HI: Unsigned higher
• LS: Unsigned lower or same
• GE: Signed greater than or equal
• LT: Signed less than
• GT: Signed greater than
• LE: Signed less than or equal
The condition code is encoded in the top four bits (bits 31–28) of every ARM instruction. The
remaining bits of the instruction are used to specify the operation to be performed, the
destination register, and the source registers.
For example, after cmp r0, r1 has set the condition flags, the instruction beq label will branch
to the label if the contents of register r0 are equal to the contents of register r1.
C
#include <stdio.h>

int main(void) {
    int x = 10;
    int y = 20;
    if (x > y) {
        printf("x is greater than y");
    } else {
        printf("x is less than or equal to y");
    }
    return 0;
}
ARM assembler
        mov   r0, #10              @ x = 10
        mov   r1, #20              @ y = 20
        cmp   r0, r1               @ compare x with y, setting the condition flags
        bgt   greater              @ branch if x > y (signed greater than)
less_or_equal:
        ldr   r0, =str_less_or_equal
        bl    printf
        b     end
greater:
        ldr   r0, =str_greater
        bl    printf
end:
The cmp instruction compares the contents of register r0 with the contents of register r1 and
sets the condition flags. The bgt instruction branches to the label greater only if x > y;
otherwise execution simply falls through to less_or_equal, so no second branch is needed.
Note that the C code tests x > y, so the matching condition is GT (signed greater than), not GE.
The ldr instruction loads the address of the appropriate string into register r0, which holds
the first argument in the ARM calling convention, and the bl instruction calls the printf
function, which prints the string to the console. (The strings str_greater and
str_less_or_equal are assumed to be defined in the data section.)
Ans- The SHARC (Super Harvard Architecture Computer) is a digital signal processing (DSP) architecture
designed by Analog Devices for high-performance signal processing applications. The SHARC programming
model is tailored to efficiently handle real-time signal processing tasks and complex mathematical
computations. Here are some key features of the SHARC programming model:
• Reduced instruction set: The SHARC ISA has a small number of simple
instructions. This makes the processor faster and easier to design and implement.
• Register-based architecture: The SHARC processor has a large number of
registers. This allows for efficient data access and manipulation.
• Harvard architecture: The SHARC processor uses a Harvard architecture, which
means that there are separate buses for instructions and data. This allows for faster
instruction fetching and data access.
• Interrupts: The SHARC processor supports interrupts. This allows the processor to
respond to events that occur outside of the main program.
The SHARC programming model is a popular choice for embedded systems because it is
efficient, portable, and easy to use. It is used in a wide variety of embedded systems,
including audio processing, digital signal processing, and communication systems.
12.Differentiate between I/O instructions and memory-mapped I/O. Why do most CPU
architectures use memory-mapped I/O? Explain.
Ans-
I/O Instructions: I/O (Input/Output) instructions are a set of instructions used by a CPU to communicate
with external devices, such as input devices (keyboard, mouse) and output devices (displays, printers).
These instructions provide a direct way for the CPU to send and receive data from peripherals. I/O
instructions involve specific machine code instructions dedicated to I/O operations.
Memory-Mapped I/O: Memory-mapped I/O is a technique where the I/O devices are treated as if
they were memory locations. Instead of having separate I/O instructions, the CPU uses regular
memory access instructions (such as load and store instructions) to read from and write to the I/O
devices. Each I/O device is assigned a unique memory address in the address space, and reading or
writing to that address triggers the corresponding I/O operation.
Usage: I/O instructions are typically used for simple I/O devices, while memory-mapped I/O is
typically used for complex I/O devices.
Most CPU architectures use memory-mapped I/O because it is more efficient and easier to use.
Memory-mapped I/O allows the CPU to access I/O devices with the same load and store
instructions that are used to access memory, so no special I/O instructions are needed and the
full range of addressing modes can be applied to device registers. This makes it easier for
programmers to write code that interacts with I/O devices.
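In C, memory-mapped device registers are conventionally accessed through volatile pointers to fixed addresses. The sketch below is illustrative only: the UART device and its register address are hypothetical, and so that the sketch can run on a host machine, the "register" is backed by an ordinary variable instead of real hardware.

```c
#include <stdint.h>

/* On real hardware the register would sit at a fixed physical address,
   e.g. #define UART_DATA ((volatile uint8_t *)0x4000C000) -- that address
   is purely illustrative. Here the "register" is backed by an ordinary
   variable so the code can run on a host machine. */
static uint8_t fake_uart_data;
#define UART_DATA ((volatile uint8_t *)&fake_uart_data)

/* Writing to the device is an ordinary store instruction... */
void uart_putc(uint8_t c) {
    *UART_DATA = c;
}

/* ...and reading from it is an ordinary load instruction. */
uint8_t uart_getc(void) {
    return *UART_DATA;
}
```

The volatile qualifier matters: it tells the compiler that each access has a side effect on the device, so reads and writes must not be cached or optimized away.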
13.What is an interrupt? Why do you use interrupted I/O? Discuss about interrupt prioritization and
interrupt overhead?
Ans- An interrupt is a mechanism used in computer systems to temporarily halt the normal execution of a
program and divert the processor's attention to a specific event or condition that requires immediate
attention. Interrupts are essential for handling time-sensitive events, such as hardware events (like user
input, hardware errors, or I/O completion) and software events (like exceptions or system calls).
There are several reasons to use interrupt-driven I/O:
• It avoids busy-waiting. Instead of repeatedly polling a device to see whether it is ready,
the processor can do useful work and respond only when the device signals an event.
• Interrupts allow the processor to save and restore its state. When an interrupt
occurs, the processor saves its current state, such as the program counter and the
registers, so that it can return to what it was doing after the interrupt is handled.
This ensures that the processor does not lose any data or corrupt its state.
Interrupt prioritization is the process of assigning a priority to each interrupt. The priority of
an interrupt determines the order in which interrupts are handled. Interrupts with a higher
priority are handled before interrupts with a lower priority.
Interrupt overhead is the amount of time it takes to handle an interrupt. Interrupt overhead
includes the time it takes to save the processor's state, handle the interrupt, and restore
the processor's state.
The amount of interrupt overhead can vary depending on the processor architecture and
the type of interrupt. In general, interrupt overhead is small, but it can be significant in
some cases.
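Because interrupt overhead is paid on every interrupt, handlers are usually kept as short as possible, deferring longer work to the main loop. The sketch below illustrates that split for a hypothetical timer interrupt; on real hardware timer_isr would be registered in the vector table, but here it is an ordinary function so the logic can run anywhere.

```c
#include <stdbool.h>
#include <stdint.h>

/* Shared between the handler and the main loop, hence volatile. */
static volatile bool     tick_pending = false;
static volatile uint32_t ticks        = 0;

/* Hypothetical timer ISR: do the minimum work inside the handler
   (to keep interrupt overhead low) and defer the rest. */
void timer_isr(void) {
    ticks++;
    tick_pending = true;
}

/* Main-loop side: consume one pending event, if any.
   Returns true if an event was serviced. */
bool service_tick(void) {
    if (!tick_pending) {
        return false;
    }
    tick_pending = false;
    /* ...longer, non-urgent processing would go here... */
    return true;
}
```

A real system would also disable the interrupt briefly (or use an atomic flag) while clearing tick_pending to avoid a race with the handler; that detail is omitted here for brevity.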
The instructions are passed through the pipeline in a continuous, orderly manner. As soon
as one instruction finishes executing in a stage, the next instruction begins executing in
that stage. This allows the processor to execute multiple instructions at the same time.
The number of stages in a pipelined processor can vary depending on the processor
architecture. More stages allow more instructions to be overlapped, which can raise the clock
speed and throughput, although deeper pipelines also increase the cost of branches and other
hazards.
Ans- The three-stage ARM pipeline consists of three main stages: Fetch, Decode, and
Execute. In this pipeline, each stage performs a specific operation on the instruction being
processed, allowing for some degree of parallelism and efficient instruction execution.
However, when dealing with branch instructions, a phenomenon known as "branch
penalty" can occur.
Consider the scenario where a branch instruction is encountered in the Fetch stage of the
pipeline. A branch instruction changes the flow of program execution by modifying the
program counter (PC) to a new target address. In a three-stage pipeline, the branch
instruction goes through the following stages:
1. Fetch Stage: The branch instruction is fetched from memory; at this point it has not
yet been identified as a branch.
2. Decode Stage: The instruction is decoded and recognized as a branch, and its target
address is computed.
3. Execute Stage: The branch condition is evaluated. If the branch is taken, the PC is
updated to the target address and the instructions already fetched behind the branch
must be discarded (flushed); if it is not taken, execution continues with the next
sequential instruction.
In a three-stage pipeline, a taken branch therefore wastes the two instructions that entered
the pipeline behind it, giving a branch penalty of two cycles.
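The cost of the branch penalty can be folded into an effective cycles-per-instruction figure. Assuming an ideal base CPI of 1 and treating the branch frequency and taken fraction as free parameters (the numbers in the usage note are illustrative, not measurements), a rough model is:

```c
/* Effective CPI = base CPI + (branch fraction) x (fraction taken) x penalty.
   For a three-stage pipeline the taken-branch penalty is 2 cycles, because
   the two instructions fetched behind the branch are discarded. */
double effective_cpi(double branch_frac, double taken_frac, int penalty_cycles) {
    return 1.0 + branch_frac * taken_frac * (double)penalty_cycles;
}
```

For example, if 20% of instructions are branches and half of those are taken, the effective CPI is 1 + 0.2 x 0.5 x 2 = 1.2, i.e. a 20% slowdown purely from branch penalties.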
Ans-
Latency and throughput are two important measures of CPU performance. Latency is the
time it takes for a CPU to complete a task. Throughput is the number of tasks that a CPU
can complete in a given amount of time.
A CPU with low latency can complete tasks quickly, while a CPU with high throughput can
complete a large number of tasks in a given amount of time.
The relationship between latency and throughput depends on the specific application. For
some applications, latency is more important than throughput. For example, a real-time
application that needs to respond to events quickly will require a CPU with low latency. For
other applications, throughput is more important than latency. For example, a web server
that needs to serve a large number of requests will require a CPU with high throughput.
In general, a CPU with low latency and high throughput will have the best performance.
However, this may not always be possible or practical. In some cases, it may be
necessary to sacrifice latency for throughput or vice versa.
Here are some examples of how latency and throughput can affect CPU performance:
• A CPU with low latency will be better for applications that require quick responses,
such as video games or real-time audio processing.
• A CPU with high throughput will be better for applications that need to process a
large number of tasks, such as web servers or databases.
• A CPU with a balance of latency and throughput will be better for general-purpose
applications.
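The latency/throughput distinction can be made concrete with a pipelined CPU. Under the idealized assumption of a perfectly filled k-stage pipeline with a fixed stage time (a sketch, not a model of any real processor):

```c
/* Latency: time for ONE instruction to travel through all k stages. */
double latency_ns(int stages, double stage_time_ns) {
    return stages * stage_time_ns;
}

/* Throughput: once the pipeline is full, one instruction completes per
   stage time, regardless of how many stages there are. */
double throughput_per_ns(double stage_time_ns) {
    return 1.0 / stage_time_ns;
}
```

For example, a 5-stage pipeline with 2 ns stages has a 10 ns latency per instruction but still completes 0.5 instructions per ns: deepening the pipeline raises latency without hurting ideal throughput, which is exactly the trade-off described above.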
18. How do we interface memory chip with CPU? Explain. Also explain the case of
multichip memory interfacing.
Ans- A memory chip is connected to the CPU through the system bus: the address bus selects a
location, the data bus carries the value, and control signals (chip select, read, write)
sequence the transfer. Data can then be moved in two main ways:
• Direct CPU access: the memory chip is mapped into the CPU's address space, and the
CPU reads and writes it with ordinary load and store instructions. This is simple to
implement, but the CPU is occupied for every transfer.
• Direct Memory Access (DMA): the CPU programs a DMA controller, which then takes over
the bus and transfers blocks of data directly to or from memory without involving the
CPU. This frees the CPU for other work and can move large blocks of data faster.
In the case of multichip memory interfacing, several memory chips share the same address and
data buses. The high-order address lines drive an address decoder, which generates a separate
chip-select signal for each chip, so that only the chip whose address range contains the
requested address responds. The low-order address lines select the location within that chip.
The number of memory chips that can be connected depends on the number of address lines
available for decoding: the more address lines that are available, the larger the total
address space that can be divided among the chips.
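Address decoding for multiple chips can be sketched in C with a hypothetical layout: a 64 KiB address space split across four 16 KiB chips, where the top two address bits act as the chip select and the remaining bits address a location within the selected chip (the sizes are chosen purely for illustration).

```c
#include <stdint.h>

/* Top two bits of a 16-bit address select one of four 16 KiB chips. */
int chip_select(uint16_t addr) {
    return addr >> 14;
}

/* The low 14 bits address a location within the selected chip. */
uint16_t chip_offset(uint16_t addr) {
    return addr & 0x3FFF;
}
```

In hardware this same mapping is implemented by a decoder (e.g. a 2-to-4 decoder on the two high address lines) driving each chip's chip-select pin, while the low address lines go to every chip in parallel.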
Here are some of the factors to consider when interfacing a memory chip with a CPU:
• The speed of the CPU: The faster the CPU, the faster the memory chip needs to
be.
• The size of the memory: The larger the memory, the more memory chips will be
needed.
• The cost of the memory: The cost of the memory chips will need to be considered.
• The complexity of the interface: The interface should be as simple as possible to
implement.
The best way to interface a memory chip with a CPU will depend on the specific
application. In general, DMA is the best choice for applications that require high
performance on large block transfers, while direct CPU access is the best choice for
applications that require simplicity.
Ans- Debugging is the process of identifying and resolving errors, bugs, and issues in software code or
hardware systems. Various debugging techniques are used to diagnose, locate, and fix problems in order to
ensure the correct functioning of a program or system. Here's a brief overview of common debugging
techniques:
• Print statements: This is a simple but effective technique for debugging. By printing
out the values of variables and expressions, you can track down the source of an
error.
• Breakpoints: This allows you to stop the execution of a program at a specific point.
This can be useful for inspecting the state of the program at that point.
• Stepping: This allows you to execute the program one instruction at a time. This can
be useful for tracing the execution of the program and identifying the source of an
error.
• Watchpoints: This allows you to monitor the value of a variable or expression. This
can be useful for detecting changes in the value of a variable that may be causing
an error.
• Logging: This allows you to record the execution of a program. This can be useful
for tracking down errors that occur only intermittently.
• Unit testing: This is a technique for testing individual units of code. This can help to
identify errors early in the development process.
• Integration testing: This is a technique for testing how different units of code interact
with each other. This can help to identify errors that occur when different parts of
the code are combined.
• System testing: This is a technique for testing the entire system. This can help to
identify errors that occur when the system is used in a real-world environment.
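The unit-testing idea above can be sketched in C with the standard assert macro. The function clamp is a made-up unit under test, used only to show the pattern:

```c
#include <assert.h>

/* Unit under test: clamp v into the range [lo, hi]. */
int clamp(int v, int lo, int hi) {
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}

/* A tiny unit test: each assert checks one behavior in isolation,
   so a failure points directly at the offending case. */
void test_clamp(void) {
    assert(clamp(5, 0, 10) == 5);   /* value inside the range is unchanged */
    assert(clamp(-3, 0, 10) == 0);  /* below the range clamps to lo */
    assert(clamp(42, 0, 10) == 10); /* above the range clamps to hi */
}
```

In embedded work such tests are often compiled and run on the host machine first, so logic errors are caught before the code ever reaches the target hardware.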
20.Discuss about different types of code optimization used while assembly code
compilation.
Ans- Code optimization is a crucial step in the compilation process that aims to improve the efficiency,
speed, and resource usage of the generated machine code. This is particularly important in assembly
language programming, where developers have fine-grained control over the code generation process.
There are many different types of code optimization used while assembly code
compilation. Here are some of the most common ones:
• Instruction scheduling: This is the process of reordering instructions in the code so
that results are ready by the time later instructions need them. This can improve the
performance of the code by reducing the number of stalls and bubbles in the pipeline.
• Code size reduction: This is the process of reducing the size of the executable file.
This can be done by removing unused code, data, and symbols.
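A third common optimization, beyond the two listed above, is strength reduction: replacing an expensive operation with a cheaper equivalent. For instance, a compiler may turn a multiply by a power of two into a shift. The two functions below are an illustrative before/after pair, not compiler output:

```c
#include <stdint.h>

/* What the programmer writes... */
uint32_t times8_mul(uint32_t x) {
    return x * 8u;
}

/* ...and the shift form a compiler may emit instead: often smaller and
   faster on simple embedded cores, with identical results. */
uint32_t times8_shift(uint32_t x) {
    return x << 3;
}
```

On cores without a fast hardware multiplier, this kind of rewrite saves both cycles and code space, which is why it matters more in embedded compilation than on desktop CPUs.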