
Ch-3

Computer Architecture
Processor to Memory Communication
• Already covered in the last chapter
I/O to Processor Communication
• The I/O subsystem of a computer provides an efficient mode of
communication between the central system and the outside
environment. It handles all the input-output operations of the
computer system.
• Input or output devices that are connected to a computer are called
peripheral devices.
Interface
• An interface is a shared boundary between two separate components of
the computer system, which can be used to attach two or more
components to the system for communication purposes.
• Peripherals connected to a computer need special communication
links for interfacing them with the central processing unit.
• The purpose of the communication link is to resolve the differences that
exist between the central computer and each peripheral.
These major differences are:
1. Peripherals are electromechanical and electromagnetic devices, while the CPU
and memory are electronic devices. Therefore, a conversion of signal
values may be needed.
2. The data transfer rate of peripherals is usually slower than that of the
CPU; consequently, a synchronization mechanism may be needed.
3. Data codes and formats in the peripherals differ from the word format in
the CPU and memory.
• To resolve these differences, computer systems include special
hardware components between the CPU and the peripherals to supervise and
synchronize all input and output transfers. These components are called
interface units because they interface between the processor bus and the
peripheral devices.
I/O Interface
• Peripherals connected to a computer need special communication
links for interfacing with the CPU. In a computer system, there are special
hardware components between the CPU and the peripherals to control or
manage the input-output transfers. These components are called
input-output interface units because they provide communication
links between the processor bus and the peripherals. They provide a method
for transferring information between the internal system and input-
output devices.
I/O Bus and Interface Modules

• Interface performs the following:


 Decodes the device address (device code)
 Decodes the commands (operation)
 Provides signals for the peripheral controller
 Synchronizes the data flow and supervises the transfer rate between peripheral and CPU or Memory

• The I/O bus consists of:

 Address lines: the processor places a particular address (unique to each I/O device) on the address lines.
 Control lines: the device which recognizes this address responds to the commands issued on the control lines.
 Data lines: when the processor requests a read or a write, the data are placed on the data lines.
• I/O commands (sent over the control lines) that the interface may receive:
 Control command: issued to activate the peripheral and to inform it what to do.
 Status command: used to test various status conditions in the interface and the peripheral.
 Output data: causes the interface to respond by transferring data from the bus into one of its registers.
 Input data: the opposite of output data; the interface receives an item of data from the peripheral and places it in one of its registers for the CPU to read.
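• As a rough illustration of these registers and commands, here is a minimal C sketch of a hypothetical memory-mapped interface. The base address, bit positions and command value are invented for illustration only; real devices define their own register maps.

    #include <stdint.h>

    /* Hypothetical register layout of one I/O interface unit (illustrative only). */
    typedef struct {
        volatile uint32_t data;     /* data register: transfer data passes through here    */
        volatile uint32_t status;   /* status register: bit 0 = "device ready" (assumed)   */
        volatile uint32_t control;  /* control register: commands issued to the peripheral */
    } io_interface_t;

    #define IFACE_BASE   ((io_interface_t *)0x40000000u)  /* invented base address    */
    #define CMD_START    0x1u                             /* invented control command */
    #define STATUS_READY 0x1u                             /* invented "ready" flag    */

    /* Control command: activate the peripheral and tell it what to do. */
    static void issue_control(io_interface_t *iface, uint32_t cmd) {
        iface->control = cmd;
    }

    /* Status command: test a status condition in the interface/peripheral. */
    static int device_ready(const io_interface_t *iface) {
        return (iface->status & STATUS_READY) != 0;
    }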
Modes of Transfer
• Data Transfer between the central computer and I/O devices may be
handled in a variety of modes.
• Some modes use CPU as an intermediate path, others transfer the
data directly to and from the memory unit.
• Data transfer to and from peripherals may be handled in one of three
possible modes.
• Programmed I/O
• Interrupt Driven I/O
• Direct Memory Access (DMA)
Programmed I/O
• Programmed I/O operations are the result of I/O instructions written
in the computer program.
• In programmed I/O, each data transfer is initiated by an instruction in the
program, and hence the CPU continuously monitors the interface.
• An input instruction is used to transfer data from the I/O device to the CPU, a store
instruction is used to transfer data from the CPU to memory, and an output
instruction is used to transfer data from the CPU to the I/O device.
• This technique is generally used in very slow computers and is not an
efficient method when the speeds of the CPU and the I/O device differ widely.
Characteristics:

• Continuous CPU involvement


• CPU slowed down to I/O speed
• Simple
• Least hardware
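• A minimal sketch of such a programmed-I/O (polling) loop is shown below, reusing the hypothetical io_interface_t layout and names from the interface sketch above (illustrative only, not a real device driver).

    /* Programmed I/O: the CPU busy-waits until the device reports ready,
       then moves one word at a time through its own registers. */
    static void programmed_io_read(io_interface_t *iface, uint32_t *buf, int count) {
        for (int i = 0; i < count; i++) {
            while (!device_ready(iface)) {
                /* CPU is stuck polling here until the device is ready */
            }
            buf[i] = iface->data;   /* input: device -> CPU register -> memory */
        }
    }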
Drawback of the Programmed I/O :
• The main drawback of programmed I/O is that the CPU has to
monitor the I/O units all the time while the program is executing.
Thus the CPU stays in a program loop until the I/O unit indicates that
it is ready for data transfer. This is a time-consuming process, and a lot
of CPU time is wasted simply waiting on the device.
• To remove this problem, an interrupt facility and special commands
are used.
Interrupt Driven I/O
• In this method an interrupt facility and an interrupt command are used to
inform the device about the start and end of the transfer. In the
meantime the CPU executes other programs. When the interface
determines that the device is ready for data transfer, it generates an
interrupt request and sends it to the CPU. When the CPU
receives such a signal, it temporarily stops the execution of the current
program and branches to a service routine to process the I/O
transfer; after completing the transfer it returns to the task it was
originally performing.
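• Below is a minimal C sketch of the same idea, continuing the hypothetical interface sketch above. register_irq_handler() and IRQ_IO_DEVICE are invented placeholders; real systems install handlers through vector tables or operating-system calls.

    #define IRQ_IO_DEVICE 5                                     /* invented IRQ number  */
    extern void register_irq_handler(int irq, void (*h)(void)); /* hypothetical API     */

    volatile int transfer_done = 0;   /* set by the service routine */

    /* Service routine: runs only when the device raises an interrupt request,
       so the CPU is free to execute other programs the rest of the time. */
    void io_service_routine(void) {
        uint32_t word = IFACE_BASE->data;   /* read the word the device made ready */
        (void)word;                         /* ... store it, advance buffers ...   */
        transfer_done = 1;
    }

    void start_transfer(void) {
        register_irq_handler(IRQ_IO_DEVICE, io_service_routine);
        issue_control(IFACE_BASE, CMD_START);   /* start the device */
        /* the CPU now continues with other work; the routine runs on each interrupt */
    }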
Interrupts
• Data transfer between the CPU and the peripherals is initiated by the
CPU. But the CPU cannot start the transfer unless the peripheral is
ready to communicate with the CPU. When a device is ready to
communicate with the CPU, it generates an interrupt signal. A
number of input-output devices are attached to the computer and
each device is able to generate an interrupt request.
• The main job of the interrupt system is to identify the source of the
interrupt. There is also a possibility that several devices will request
simultaneously for CPU communication. Then, the interrupt system
has to decide which device is to be serviced first.
Priority Interrupt
• A priority interrupt is a system that decides the order in which
various devices, which generate interrupt signals at the same
time, will be serviced by the CPU.
• Generally, devices with high speed transfer such as magnetic disks are
given high priority and slow devices such as keyboards are given low
priority.
• When two or more devices interrupt the computer simultaneously,
the computer services the device with the higher priority first.
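• One simple way to make this decision is a priority encoder over the interrupt request lines. The C sketch below assumes eight request lines held in a bit mask, with bit 0 as the highest priority; the numbering is invented for illustration.

    #include <stdint.h>

    /* Pick the highest-priority pending request. Bit 0 is assumed to be the
       highest priority (e.g. a disk), bit 7 the lowest (e.g. a keyboard).
       Returns the device number, or -1 if no request is pending. */
    int highest_priority_device(uint8_t irq_requests) {
        for (int dev = 0; dev < 8; dev++) {
            if (irq_requests & (1u << dev)) {
                return dev;   /* service this device first */
            }
        }
        return -1;            /* no interrupt pending */
    }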
Hardware Interrupt
• When the signal to the processor comes from an external device or hardware,
the interrupt is known as a hardware interrupt.
• Let us consider an example: when we press a key on our keyboard, the key
press generates an interrupt signal telling the processor to perform a
certain action. Such an interrupt can be of
two types:
• Maskable Interrupt
• The hardware interrupts which can be delayed when a higher-priority
interrupt has occurred at the same time.
• Non-Maskable Interrupt
• The hardware interrupts which cannot be delayed and must be processed
by the processor immediately.
Software Interrupt
• An interrupt that is caused internally by the computer system (i.e., by
software) is known as a software interrupt. It can also be of two types:
• Normal Interrupt
• The interrupts that are caused by software instructions are called
normal software interrupts.
• Exception
• Unplanned interrupts which are produced during the execution of
some program are called exceptions, such as division by zero.
Direct Memory Access
• In Direct Memory Access (DMA), the interface transfers the data into and out
of the memory unit through the memory bus. The transfer of data between a fast
storage device such as a magnetic disk and memory is often limited by the speed of
the CPU. Removing the CPU from the path and letting the peripheral device
manage the memory buses directly improves the speed of transfer. This
transfer technique is called Direct Memory Access (DMA).
• A transfer that passes through the CPU takes two bus cycles per word (device to
CPU, then CPU to memory); a DMA transfer takes one cycle (device directly to or
from memory).
• Before a transfer, the DMA controller is given four parameters:
• 1. Source address
• 2. Target (destination) address
• 3. Byte count
• 4. Direction of transfer
• During the DMA transfer, the CPU is idle and has no control of the memory
buses. A DMA controller takes over the buses to manage the transfer directly
between the I/O device and memory. The CPU may be placed in an idle state in a
variety of ways. One common method, extensively used in microprocessors, is to
disable the buses through special control signals such as:
• Bus Request (BR)
• Bus Grant (BG)
Contd….
• These are two control signals in the CPU that facilitate the DMA transfer. The Bus Request (BR) input
is used by the DMA controller to request the CPU to relinquish the buses. When this input is active, the CPU terminates
the execution of the current instruction and places the address bus, data bus and read/write lines
into a high-impedance state. High-impedance state means that the output is disconnected.
• The CPU activates the Bus Grant (BG) output to inform the external DMA controller that it
can now take control of the buses to conduct memory transfers without the processor. When the
DMA controller terminates the transfer, it disables the Bus Request (BR) line. The CPU then disables Bus
Grant (BG), takes control of the buses and returns to its normal operation.
• The transfer can be made in several ways that are:
• i. DMA Burst
• ii. Cycle Stealing
• DMA Burst: In a DMA burst transfer, a block sequence consisting of a number of memory words is
transferred in a continuous burst while the DMA controller is master of the memory buses.
• Cycle Stealing: Cycle stealing allows the DMA controller to transfer one data word at a time, after
which it must return control of the buses to the CPU.
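• A minimal sketch of handing the four transfer parameters to a hypothetical DMA controller is shown below. The register layout, base address and bit encodings are invented for illustration; real controllers each define their own programming model.

    #include <stdint.h>

    /* Hypothetical DMA controller registers (illustrative layout only). */
    typedef struct {
        volatile uint32_t source;      /* 1. source address               */
        volatile uint32_t target;      /* 2. target (destination) address */
        volatile uint32_t byte_count;  /* 3. number of bytes to transfer  */
        volatile uint32_t control;     /* 4. direction + start bit        */
    } dma_controller_t;

    #define DMA_BASE      ((dma_controller_t *)0x40001000u)  /* invented address     */
    #define DMA_DIR_READ  0x0u   /* device -> memory (invented encoding) */
    #define DMA_DIR_WRITE 0x2u   /* memory -> device (invented encoding) */
    #define DMA_START     0x1u

    /* Program the four parameters, then start the transfer. The controller
       raises Bus Request (BR), waits for Bus Grant (BG), and moves the data
       while the CPU is off the buses. */
    void dma_transfer(uint32_t src, uint32_t dst, uint32_t nbytes, uint32_t dir) {
        DMA_BASE->source     = src;
        DMA_BASE->target     = dst;
        DMA_BASE->byte_count = nbytes;
        DMA_BASE->control    = dir | DMA_START;
    }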
Next Topics
• ISA
• RISC
• CISC
Instruction Set Architecture
• The Instruction Set Architecture (ISA) is the part of the processor that
is visible to the programmer or compiler writer. The ISA serves as the
boundary between software and hardware.
• It is a collection of machine language instructions that a particular
processor understands and executes.
• Instructions are machine dependent, i.e. different
processors have different instruction sets.
Categorisation of ISA

The ISA of a processor can be described using 5 categories:
• Operand storage in the CPU: where are the operands kept other than in memory?
• Number of explicitly named operands: how many operands are named in a typical instruction?
• Operand location: can any ALU instruction operand be located in memory, or must all operands be kept internally in the CPU?
• Operations: what operations are provided in the ISA?
• Type and size of operands: what is the type and size of each operand and how is it specified?
Most common types of ISAs are:
• The 3 most common types of ISAs are:
• Stack - The operands are implicitly on top of the stack.
• Accumulator - One operand is implicitly the accumulator.
• General Purpose Register (GPR) - All operands are explicitly
mentioned; they are either registers or memory locations.
Example
Let's look at the assembly code of
C = A + B;
in all 3 architectures:
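• As a rough sketch using generic mnemonics (not any particular processor's syntax), the three styles might look like this:

    Stack:
        PUSH A        ; push A onto the stack
        PUSH B        ; push B onto the stack
        ADD           ; pop A and B, push A + B
        POP  C        ; pop the result into C

    Accumulator:
        LOAD  A       ; accumulator = A
        ADD   B       ; accumulator = accumulator + B
        STORE C       ; C = accumulator

    GPR (load-store):
        LOAD  R1, A
        LOAD  R2, B
        ADD   R3, R1, R2
        STORE C, R3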
ISA types Contd….
• Not all processors can be neatly tagged into one of the above
categories. The i8086 has many instructions that use implicit
operands although it has a general register set. The i8051 is another
example; it has 4 banks of GPRs but most instructions require the
A register as one of their operands.
What are the advantages and disadvantages of each of these
approaches?
ISA Types contd…
• Stack
• Advantages: Simple model of expression evaluation (reverse Polish). Short instructions.
Disadvantages: A stack can't be randomly accessed. This makes it hard to generate efficient code. The stack
itself is accessed on every operation and becomes a bottleneck.

• Accumulator
• Advantages: Short instructions.
Disadvantages: The accumulator is only temporary storage so memory traffic is the highest for this
approach.

• GPR
• Advantages: Makes code generation easy. Data can be stored for long periods in registers.
Disadvantages: All operands must be named leading to longer instructions.
• Earlier CPUs were of the first 2 types, but in the last 15 years all CPUs made are GPR processors. The 2 major
reasons are that registers are faster than memory (the more data that can be kept internally in the CPU, the
faster the program will run) and that registers are easier for a compiler to use.
RISC and CISC
• Reduced Instruction Set Computer (RISC) –
The main idea behind RISC is to make the hardware simpler by using an instruction set
composed of a few basic steps for loading, evaluating and storing; for example, an
addition is built from separate load, evaluate and store operations.
• Complex Instruction Set Computer (CISC) –
The main idea is to let the hardware be complex, so that a single instruction does all the
loading, evaluating and storing; for example, a multiplication command loads the data,
evaluates it and stores the result on its own.
• Both approaches try to increase the CPU performance
• RISC: Reduce the cycles per instruction at the cost of the number of instructions
per program.
• CISC: The CISC approach attempts to minimize the number of instructions per
program, but at the cost of an increase in the number of cycles per instruction.
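• As a rough illustration (generic mnemonics, not a real instruction set), multiplying two values held in memory might look like this in each style:

    CISC (one complex, memory-to-memory instruction):
        MULT addrA, addrB      ; load both operands, multiply, store the result back

    RISC (several simple, register-based instructions):
        LOAD  R1, addrA
        LOAD  R2, addrB
        MUL   R3, R1, R2
        STORE addrA, R3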
RISC Processor
• It is known as Reduced Instruction Set Computer. It is a type of
microprocessor that has a limited number of instructions. They can
execute their instructions very fast because instructions are very
small and simple.
• RISC chips require fewer transistors, which makes them cheaper to
design and produce. In RISC, the instruction set contains simple and
basic instructions from which more complex instructions can be
composed. Most instructions complete in one cycle, which allows the
processor to handle several instructions at the same time.
• In RISC, instructions are register based and data transfer takes place
from register to register.
RISC
• RISC stands for Reduced Instruction Set Computer. If there is separate
electronic circuitry in the control unit that produces all the signals
necessary to execute each instruction, this approach to the design
of the control section of the processor is called RISC design. It is also
called the hardwired approach.
• Examples of RISC processors:
• IBM RS6000, MC88100
• DEC’s Alpha 21064, 21164 and 21264 processors
Features of RISC Processors:

• The standard features of RISC processors are listed below:


• RISC processors use a small and limited number of instructions.
• RISC machines mostly use a hardwired control unit.
• RISC processors consume less power and have high
performance.
• Each instruction is very simple and consistent.
• RISC processors use simple addressing modes.
• RISC instructions are of uniform, fixed length.
Features of RISC contd..
• Certain design features have been characteristic of most RISC
processors
• One Cycle Execution Time. RISC processors have a CPI (clocks per
instruction) of one cycle.
• Pipelining. A technique that allows the simultaneous execution of
parts, or stages, of instructions, so that instructions are processed
more efficiently.
• Large Number of Registers. The RISC design philosophy generally
incorporates a larger number of registers to reduce the number
of interactions with memory.
Features of CISC processors
• The standard features of CISC processors are listed below:
• CISC chips have a large number of different and complex instructions.
• CISC machines generally make use of complex addressing modes.
• Different machine programs can be executed on a CISC machine.
• CISC machines use a microprogrammed control unit.
• CISC processors have a limited number of registers.
Virtual Memory
•?
