
Ans 1

ARM Pipelining: Pipelining is the mechanism used by RISC (Reduced Instruction Set Computer) processors to speed up execution: an instruction is fetched while earlier instructions are still being decoded and executed simultaneously.
ARM 7 – It has a 3-stage pipeline, as shown in the figure, so an instruction completes in 3 cycles. It uses the basic fetch-decode-execute cycle. Because it has the fewest pipeline stages, ARM 7 has the lowest throughput of its family members. It processes 32-bit data.
ARM 9 – Pipelining in ARM 9 is similar to ARM 7 but with 5 stages, so an instruction takes 5 cycles to pass through the pipeline:

Fetch – fetches instructions from memory.
Decode – decodes the instruction fetched in the previous cycle.
ALU (Execute) – executes the instruction decoded in the previous stage.
LS1 (Memory) – loads/stores the data specified by load or store instructions.
LS2 (Write) – zero- or sign-extends the data loaded by byte or half-word load instructions.

Because of the increase in stages and efficiency, the throughput is 10%–13% higher than that of ARM 7. The core frequency of ARM 9 is slightly higher than that of ARM 7.
ARM 10 – It has a six-stage pipeline, so an instruction takes 6 cycles to pass through. It is the same as ARM 9 but with an additional issue stage, which checks whether the instruction is ready to be decoded in the current cycle. Its throughput is nearly double that of ARM 7, and its core frequency is higher than that of ARM 9.
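
To make the overlap concrete, here is a minimal C sketch (an illustrative simulation, not real ARM timing; the stage names and the five-instruction program are assumptions) that prints which instruction occupies each stage of a 3-stage pipeline in every cycle.

CODE:

/* Simplified simulation of a 3-stage (ARM 7-style) pipeline: each cycle,
 * instructions advance one stage, so the fetch of instruction i+2 overlaps
 * the decode of i+1 and the execute of i. Purely illustrative; it models
 * no stalls, hazards, or real ARM timing. */
#include <stdio.h>

#define NUM_STAGES 3
#define NUM_INSTRS 5

int main(void)
{
    const char *stage_name[NUM_STAGES] = { "Fetch", "Decode", "Execute" };

    /* Instruction i (0-based) occupies stage s during cycle i + s. */
    for (int cycle = 0; cycle < NUM_INSTRS + NUM_STAGES - 1; cycle++) {
        printf("cycle %d:", cycle + 1);
        for (int s = NUM_STAGES - 1; s >= 0; s--) {
            int instr = cycle - s;
            if (instr >= 0 && instr < NUM_INSTRS)
                printf("  %s(I%d)", stage_name[s], instr + 1);
        }
        printf("\n");
    }
    return 0;
}

Once the pipeline is full, one instruction completes every cycle: the five instructions finish in 7 cycles instead of the 15 a non-pipelined core would need. The same overlap argument, with more stages, accounts for the ARM 9 and ARM 10 throughput gains.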

Ans 2

Multiple processor scheduling, or multiprocessor scheduling, is the design of the scheduling function for a system with more than one processor. Multiple CPUs share the load (load sharing) in multiprocessor scheduling so that various processes run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. In multiprocessor scheduling there are many processors, and when they are identical, any process can be run on any processor at any time.

The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so the system is tightly coupled. These systems are used when we want to process a bulk amount of data, and they are mainly used in satellites, weather forecasting, etc.

There are cases when the processors are identical, i.e., homogeneous, in terms of their functionality. In multiple-processor scheduling we can then use any available processor to run any process in the queue, as the sketch below illustrates.
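
As a rough illustration of load sharing, the C sketch below lets several worker threads, standing in for identical CPUs, pull processes from one shared ready queue, so any processor can run any process. The queue layout, task count, and CPU count are hypothetical; this is a toy model, not an OS scheduler.

CODE:

/* Load sharing on identical processors: worker threads (one per "CPU")
 * take the next task from a single shared ready queue. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_CPUS  4
#define NUM_TASKS 12

static int queue[NUM_TASKS];                 /* shared ready queue of task ids */
static int head = 0;                         /* next task to dispatch */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *cpu_worker(void *arg)
{
    long cpu_id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (head == NUM_TASKS) {             /* queue drained: this CPU idles */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int task = queue[head++];            /* any CPU may take any task */
        pthread_mutex_unlock(&lock);

        printf("CPU %ld running task %d\n", cpu_id, task);
        usleep(1000);                        /* stand-in for real work */
    }
}

int main(void)
{
    pthread_t cpus[NUM_CPUS];
    for (int i = 0; i < NUM_TASKS; i++)
        queue[i] = i;
    for (long i = 0; i < NUM_CPUS; i++)
        pthread_create(&cpus[i], NULL, cpu_worker, (void *)i);
    for (int i = 0; i < NUM_CPUS; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}

Because the queue is shared, the load balances itself: an idle CPU simply takes the next waiting process, which is exactly the load-sharing behavior described above.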

Ans 3

Direct Memory Access (DMA) is a process of transferring data from one memory location to another without the direct involvement of the processor (CPU). The main benefit of using DMA is more efficient data movement in an embedded system.
There are many different types of DMA implementations, some of them for very specific use cases. Here we will focus on the general principles of operation. Let's start with the simple system shown below in Fig. 1.

The functional unit that performs the operations for directly accessing memory is called a DMA controller. The simplified block diagram (Fig. 1) shows a CPU, a RAM, a peripheral unit, and a DMA controller. All except the peripheral unit are connected to the same bus. As the CPU and the DMA controller must be able to initiate transfers, both have master interfaces. Although the goal is to have DMA that operates independently, the CPU is the one that has to configure the DMA controller to perform transfers. The DMA controller can be dedicated to a specific DMA-capable peripheral unit (as shown in Fig. 1) or can be a more general DMA controller able to access various types of memory-mapped peripherals.
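
The C sketch below shows how the CPU might program such a controller through memory-mapped registers. The register layout (SRC, DST, LEN, CTRL, STATUS), the base address, and the bit assignments are hypothetical assumptions for illustration, not those of any real device.

CODE:

/* Configuring a hypothetical memory-mapped DMA controller. */
#include <stdint.h>

#define DMA_BASE 0x40001000u                 /* assumed controller base address */
#define REG(off) (*(volatile uint32_t *)(DMA_BASE + (off)))

#define DMA_SRC    REG(0x00)                 /* source address          */
#define DMA_DST    REG(0x04)                 /* destination address     */
#define DMA_LEN    REG(0x08)                 /* transfer length, bytes  */
#define DMA_CTRL   REG(0x0C)                 /* control: bit 0 = start  */
#define DMA_STATUS REG(0x10)                 /* status:  bit 0 = done   */

/* The CPU only configures the transfer; the DMA controller then moves
 * the data on its own, leaving the CPU free for other work. */
void dma_copy(uint32_t src, uint32_t dst, uint32_t nbytes)
{
    DMA_SRC  = src;
    DMA_DST  = dst;
    DMA_LEN  = nbytes;
    DMA_CTRL = 1u;                           /* start the transfer */

    while ((DMA_STATUS & 1u) == 0)           /* poll for completion */
        ;
}

In a real system the polling loop would normally be replaced by a completion interrupt from the DMA controller, so the CPU does useful work during the transfer.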

Ans 4

UART ISA

Industry Standard Architecture (ISA) is the 16-bit internal bus of IBM PC/AT and similar computers based on
the Intel 80286 and its immediate successors during the 1980s. The bus was (largely) backward compatible with
the 8-bit bus of the 8088-based IBM PC, including the IBM PC/XT as well as IBM PC compatibles.
Originally referred to as the PC bus (8-bit) or AT bus (16-bit), it was also termed I/O Channel by IBM. The ISA term was coined as a retronym by competing PC-clone manufacturers in the late 1980s or early 1990s as a reaction to IBM's attempts to replace the AT bus with its new and incompatible Micro Channel Architecture.

The 16-bit ISA bus was also used with 32-bit processors for several years. An attempt to extend it to 32 bits,
called Extended Industry Standard Architecture (EISA), was not very successful, however. Later buses such as VESA
Local Bus and PCI were used instead, often along with ISA slots on the same mainboard. Derivatives of the AT bus
structure were and still are used in ATA/IDE, the PCMCIA standard, CompactFlash, the PC/104 bus, and internally within Super I/O chips.
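
Devices on the ISA bus and its AT-bus derivatives are reached through x86 port I/O. As a small illustration, the C sketch below polls the legacy COM1 UART, which PC/AT-compatible machines place at I/O port base 0x3F8 with the classic 8250/16550 register layout; it assumes an x86 environment with I/O privilege (kernel or freestanding code), so it will not run as an ordinary user program.

CODE:

/* Polled transmit on the legacy COM1 UART via ISA-style port I/O. */
#include <stdint.h>

#define COM1_BASE 0x3F8
#define UART_THR  (COM1_BASE + 0)            /* transmit holding register  */
#define UART_LSR  (COM1_BASE + 5)            /* line status register       */
#define LSR_THRE  0x20                       /* transmit holding reg empty */

static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* Busy-wait until the UART can accept a byte, then send it. */
void uart_putc(char c)
{
    while ((inb(UART_LSR) & LSR_THRE) == 0)
        ;
    outb(UART_THR, (uint8_t)c);
}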

Ans 5

1. Direct Addressing


Direct addressing is done through a 9-bit address. This address is obtained by combining the seven address bits taken from the instruction with two bank-select bits (RP1, RP0) from the STATUS register, as shown in the following picture. Any access to the SFR registers is an example of direct addressing.
CODE:

bsf   STATUS, RP0   ; RP0 = 1: select Bank 1 (TRISA is in Bank 1)
movlw 0xFF          ; W = 0xFF
movwf TRISA         ; address of the TRISA register is taken from
                    ; the movwf instruction itself
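
As a worked example (the numbers are standard for mid-range PIC devices, not from the text above): TRISA resides at address 0x85. The movwf instruction encodes only the low seven bits, 0x05; the bank-select bits RP1:RP0 = 01 supply the upper two bits, giving the full 9-bit address 01 0000101b = 0x85. Writing 0xFF to TRISA configures all PORTA pins as inputs.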
