
Computer Organization & Architecture


1.1 Introduction to Computer Organization and Architecture
Computer Architecture
Computer Architecture refers to those attributes of a system visible to the programmer.
These attributes have direct impact on the logical execution of the program.
Architectural attributes include Instruction set, number of bits used to represent data
types (e.g., numbers, characters), I/O mechanisms, and addressing techniques.

Computer Organization
Computer Organization refers to the operational units and their interconnections that
realize the architectural specifications.
Organizational attributes include hardware details transparent to the programmer, such as
control signals, interfaces between the computer and peripherals, and the memory
technology used.
For example, it is an architectural design issue whether a computer will have a multiply
instruction. It is an organizational issue whether the instruction will be implemented by a special
multiply unit or by a mechanism that makes repeated use of the add unit of the system.
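The multiply example can be sketched in code. The snippet below is a hypothetical illustration (the function names are invented for this sketch): both functions satisfy the same architectural contract, "multiply two integers", but realize it with different organizations.

```python
# Hypothetical sketch: one architectural specification ("multiply"),
# two different organizations that implement it.

def multiply_direct(a, b):
    """Organization 1: a dedicated multiply unit (modeled here by '*')."""
    return a * b

def multiply_by_repeated_add(a, b):
    """Organization 2: reuse the add unit, accumulating |b| additions."""
    result = 0
    for _ in range(abs(b)):
        result += a                # only the adder is exercised
    return result if b >= 0 else -result

# A programmer sees identical behavior from both organizations:
assert multiply_direct(6, 7) == multiply_by_repeated_add(6, 7) == 42
```

To the programmer both look the same; the differences (speed, hardware cost) are organizational, which is exactly the distinction drawn above.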

1.2 Functional View and Structure of Computer
A computer is a complex system, consisting of millions of elementary electronic components.
Structure refers to the way in which these components are interrelated.
Function refers to the operation of each individual component as part of the structure.
Figure 1 depicts the basic functions that a computer can perform. In general terms, there
are only four functions:
Data processing
Data storage
Data movement
Control
The computer must be able to process data.
The processing of data is performed inside the computer in a special unit called the
Arithmetic and Logic Unit (ALU).
It is also essential that a computer store data.
University Question:
Explain with suitable examples the difference between computer architecture and computer
organization. [5 Marks, Dec 2008, June 2009, June 2010, Dec 2011]

While data is being processed, it is held in CPU registers or in main memory (RAM).

Figure 1. Functional View of the Computer
The computer must be able to move data between itself and the outside world.
Internally, data is moved between the CPU and memory, between memory locations, and
between I/O modules and memory.
Finally, there must be control of these three functions. The control unit manages the
system's resources and directs all activity in the CPU, memory, and I/O.
Figure 2 depicts the internal structure of the computer. There are four main structural
components:
Central Processing Unit (CPU): Controls the operation of the computer and
performs its data processing functions; often simply referred to as the processor.
Main Memory: Stores data.
I/O: Moves data between the computer and its external environment.
System Interconnection: Some mechanism that provides for communication among
CPU, main memory, and I/O. A common example of system interconnection is by
means of a system bus, consisting of a number of conducting wires to which all other
components attach.

The most interesting and the most complex component is the CPU. Its major structural
components are as follows:
Control Unit: Controls the operation of the CPU and hence the computer.
Arithmetic and Logic Unit (ALU): Performs the computer's data processing functions.
Registers: Provide storage internal to the CPU.
CPU Interconnection: Some mechanism that provides for communication among the
control unit, ALU, and registers.

Figure 2. The Structure of Computer

1.3 Von Neumann Model
The Von Neumann model is also called the stored-program architecture.
Figure 3 shows the Von Neumann Architecture. It consists of the following components:
Memory: Memory holds both the data and the program that processes that data.
Typically, it is realized as RAM.
Control Unit: It manages the movement of data and program instructions in and out of
memory, and executes the program's instructions one at a time. This includes
registers that hold intermediate values; the accumulator is one such register.
Input-Output: This architecture allows for the idea that a person needs to interact with
the machine. Values passed back and forth are again held in internal registers.

Figure 3. Von Neumann Model
Arithmetic Logic Unit: This part of the architecture is solely concerned with carrying out
calculations upon the data. All the usual operations such as addition, subtraction,
multiplication and division are performed, along with comparison operations such as
greater than, less than and equal to.
Bus: The arrows between the components indicate the flow of information. In modern
computers these pathways are called buses. There are 3 types of bus:

Address Bus: Identifies locations in memory.
Data Bus: Carries data and program instructions.
Control Bus: Controls the use of the address and data buses, and transmits command
and timing information among the system modules.
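As a rough illustration of the stored-program idea, the sketch below uses a hypothetical three-instruction machine (not any real ISA): instructions and data live in one memory, and a control loop fetches and executes them one at a time, accumulating results in ACC.

```python
# Minimal sketch of the Von Neumann stored-program cycle with an
# invented ISA (LOAD, ADD, HALT). Instructions and data share memory.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc]           # fetch via the address/data buses
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]         # the ALU performs the addition
        elif op == "HALT":
            return acc

# Program and data in one memory: cells 0-2 hold instructions,
# cells 3-4 hold data.
mem = {0: ("LOAD", 3), 1: ("ADD", 4), 2: ("HALT", 0), 3: 10, 4: 32}
print(run(mem))   # prints 42
```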

1.4 Evolution of Computers
i. Mechanical Era
Blaise Pascal made the very first attempt at automatic computing. He
developed a device called the Pascaline, which consisted of many gears and chains
and was used to perform repeated additions and subtractions.
Later, many attempts were made in this direction. Charles Babbage designed two
engines:
The Difference Engine: Used to perform calculations on large numbers using
a formula. It was also used for evaluating polynomial and trigonometric
functions.
The Analytical Engine: A general-purpose computing device, which
could be used for performing any mathematical operation automatically. It
consisted of the following components:
a) The Store: The memory unit, consisting of a set of counters.
b) The Mill: It is the arithmetic unit which is capable of performing
the four basic arithmetic operations.
c) Cards: There are basically two types of cards:
1. Operation Cards: Select one of the four arithmetic
operations by activating the mill to perform the selected
operation.
2. Variable Cards: Select the memory location to be used by
the mill for a particular operation.
ii. First Generation
The first electronic computers were constructed using vacuum tubes. The first
computer constructed using vacuum tube technology was ENIAC (Electronic
Numerical Integrator and Computer). Operations were performed on decimal
numbers, and programs and data were stored in separate memories.
University Question:
Explain Von Neumann Model in detail. [5/10 Marks, Dec 2009, June 2010, June 2011, June
2012, Dec 2012]

EDVAC (Electronic Discrete Variable Automatic Computer) stored programs and data in
the same memory. Operations were performed on binary numbers to minimize hardware
cost, and data was processed bit by bit.
iii. Second Generation
Vacuum tubes were replaced by transistors.
Transistors are smaller, cheaper and dissipate less heat than vacuum tubes.
These machines had greater speed and larger memory capacity.
More registers were added to the CPU to facilitate data and address manipulation.
Floating-point numbers were introduced to support scientific applications.
High level programming languages were introduced.
iv. Third Generation
Integrated Circuits (ICs) were introduced.
IC technology provided the following features:
Higher speed
Smaller size
Lower hardware cost
Lower power consumption
More reliable circuit
v. VLSI Era
VLSI allowed manufacturers to fabricate the CPU, main memory, or even the entire
electronic circuitry of a computer on a single IC at very low cost.
This resulted in a new class of machines ranging from portable personal
computers to supercomputers that contain thousands of CPUs.
Two important impacts of VLSI are:
Semiconductor memory
The microprocessor

1.5 Performance Measure of Computer Architecture
1. Clock Speed and Instructions per Second:
Operations performed by a processor, such as fetching an instruction,
decoding the instruction, performing an arithmetic operation, and so on, are
governed by the system clock.
Typically, all operations begin with the pulse of the clock.
Speed of the processor is measured by the pulse frequency produced by the
clock, measured in cycles per second or Hertz.
The rate of pulses is called the clock rate, or clock speed.
2. Instruction Execution Rate:
A processor is driven by a clock with a constant frequency f or, equivalently,
a constant cycle time τ, where τ = 1/f.


The size of a program is given in terms of its instruction count (Ic), the number of
machine instructions executed. Different machine instructions may require different
numbers of clock cycles to execute. Therefore, the average cycles per instruction (CPI)
becomes an important parameter for measuring the time needed to execute a program
on a given machine.
CPI depends both on the machine and on the program. Let CPI_i be the number of
cycles required for instruction type i, and I_i the number of executed instructions of
type i for a given program. Then we can calculate an overall CPI as follows:

CPI = Σ_i (CPI_i × I_i) / Ic

If Ic is the number of instructions in a given program, the CPU time needed to
execute the program is estimated by:

T = Ic × CPI × τ = (Ic × CPI) / f
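The CPI and CPU-time relationships can be sketched directly in code. The function names and the (count, cycles) pair representation below are illustrative choices, not from the text:

```python
# Sketch of CPI = sum(CPI_i * I_i) / Ic and T = Ic * CPI / f.
# mix is a list of (instruction_count, cycles_per_instruction) pairs.

def effective_cpi(mix):
    total_cycles = sum(count * cycles for count, cycles in mix)
    ic = sum(count for count, _ in mix)
    return total_cycles / ic

def cpu_time(mix, f):
    """Estimated CPU time: T = Ic * CPI * tau, with tau = 1/f."""
    ic = sum(count for count, _ in mix)
    return ic * effective_cpi(mix) / f

# Tiny example: 1,000 one-cycle and 1,000 three-cycle instructions.
assert effective_cpi([(1_000, 1), (1_000, 3)]) == 2.0
```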

3. MIPS Rate:
The processor speed is often measured in terms of millions of instructions per
second (MIPS). This is called the MIPS rate of a given processor. The MIPS rate
varies with respect to a number of factors:
- Clock rate (f)
- Instruction count (Ic)
- CPI
If a program having Ic instructions requires T seconds of CPU time, then:

Time required to execute 1 instruction = T / Ic

Time required to execute 1 million (10^6) instructions = (T × 10^6) / Ic

Hence, MIPS rate = Ic / (T × 10^6) = f / (CPI × 10^6)

4. Throughput Rate:
The throughput rate (W) is defined as the number of programs a system can execute
per unit time.


1. A benchmark program is run on an 80 MHz processor. The executed program consists of
100,000 instruction executions, with the following instruction mix and clock cycle counts:

Instruction Type       Instruction Count    Cycles per Instruction
Integer arithmetic     45,000               1
Data transfer          32,000               2
Floating point         15,000               2
Control transfer        8,000               2
Determine the effective CPI, MIPS rate, and execution time for this program.

CPI = {(45,000 × 1) + (32,000 × 2) + (15,000 × 2) + (8,000 × 2)} / 100,000
    = 155,000 / 100,000
    = 1.55 clock cycles/instruction

Here, f = 80 MHz = 80 × 10^6 Hz

MIPS rate = f / (CPI × 10^6) = (80 × 10^6) / (1.55 × 10^6) = 51.61 MIPS

Execution time, T = (Ic × CPI) / f = (100,000 × 1.55) / (80 × 10^6) = 1.9375 ms
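The worked answer can be checked with a short script using the same figures:

```python
f = 80e6                                     # 80 MHz clock
mix = [(45_000, 1),                          # integer arithmetic
       (32_000, 2),                          # data transfer
       (15_000, 2),                          # floating point
       (8_000, 2)]                           # control transfer

ic = sum(n for n, _ in mix)                  # 100,000 instructions
cpi = sum(n * c for n, c in mix) / ic        # 1.55 cycles/instruction
mips = f / (cpi * 1e6)                       # ~51.61 MIPS
t = ic * cpi / f                             # ~0.0019375 s = 1.9375 ms
```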

2. Consider two different machines, with two different instruction sets, both of which have a
clock rate of 200 MHz. The following measurements are recorded on the two machines
running a given set of benchmark programs:


Instruction Type          Instruction Count (millions)    Cycles per Instruction
Machine A
  Arithmetic and logic    8                               1
  Load and store          4                               3
  Branch                  2                               4
  Others                  4                               3
Machine B
  Arithmetic and logic    10                              1
  Load and store          8                               2
  Branch                  2                               4
  Others                  4                               3

a. Determine the effective CPI, MIPS rate, and execution time for each machine.
b. Comment on the results.
a.
For machine A:

CPI_A = {(8 × 1) + (4 × 3) + (2 × 4) + (4 × 3)} × 10^6 / {(8 + 4 + 2 + 4) × 10^6}
      = 40 / 18 ≈ 2.22 cycles/instruction

MIPS rate_A = (200 × 10^6) / (2.22 × 10^6) ≈ 90.09 MIPS

Execution time, T_A = (18 × 10^6 × 2.22) / (200 × 10^6) ≈ 0.1998 s

For machine B:

CPI_B = {(10 × 1) + (8 × 2) + (2 × 4) + (4 × 3)} × 10^6 / {(10 + 8 + 2 + 4) × 10^6}
      = 46 / 24 ≈ 1.92 cycles/instruction

MIPS rate_B = (200 × 10^6) / (1.92 × 10^6) ≈ 104.17 MIPS

Execution time, T_B = (24 × 10^6 × 1.92) / (200 × 10^6) ≈ 0.23 s

b. Although machine B has a higher MIPS rate than machine A, it requires a longer CPU time
to execute the same set of benchmark programs.
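A quick script reproduces part (a); the instruction counts (in millions) are those from the benchmark table. The exact values differ slightly from the hand calculation, which rounds CPI to two decimal places before computing the MIPS rate and execution time, but the conclusion is the same: B has the higher MIPS rate while A has the shorter execution time.

```python
f = 200e6                                    # both machines run at 200 MHz

def stats(mix):
    """mix: list of (count in millions, cycles per instruction) rows."""
    ic = sum(n for n, _ in mix)              # total instructions, in millions
    cpi = sum(n * c for n, c in mix) / ic
    mips = f / (cpi * 1e6)
    time_s = (ic * 1e6) * cpi / f
    return cpi, mips, time_s

machine_a = [(8, 1), (4, 3), (2, 4), (4, 3)]
machine_b = [(10, 1), (8, 2), (2, 4), (4, 3)]

cpi_a, mips_a, t_a = stats(machine_a)        # 40/18 ~ 2.22, 90 MIPS, 0.20 s
cpi_b, mips_b, t_b = stats(machine_b)        # 46/24 ~ 1.92, ~104.35 MIPS, 0.23 s
```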

Bus Concept
- The major computer system components (processor, main memory, I/O modules) need to
be interconnected in order to exchange data and control signals.
- A bus is a communication pathway connecting two or more devices.
- It is a shared transmission medium. Multiple devices connect to the bus, and a signal
transmitted by any one device is available for reception by all other devices attached to
the bus. If two devices transmit during the same time period, their signals will overlap
and become garbled. Thus, only one device at a time can successfully transmit.
- A bus that connects major computer components (processor, memory, I/O) is called a
system bus.
Bus Structure
- A system bus typically consists of from about 50 to hundreds of separate lines.
- Each line is assigned a particular meaning or function. Each line is capable of
transmitting signals representing binary 0 and binary 1.
- These lines can be classified into three functional groups as shown in figure 4: data,
address, and control lines.

Figure 4: Bus Structure
Data Bus
- The data lines provide a path for moving data among system modules. These lines,
collectively, are called the data bus.
- The data bus may consist of 32, 64, 128 or even more separate lines, the number of lines
being referred to as the width of the data bus.
- Because each line can carry only 1 bit at a time, the number of lines determines how
many bits can be transferred at a time.
- The data lines are bidirectional, so that the data can be sent or received by the processor.
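As a small illustration of bus width (with hypothetical numbers): since each data line carries one bit per transfer, moving a value wider than the bus takes multiple transfers.

```python
# Each data line carries one bit per transfer, so the bus width fixes
# how many transfers a given value needs (illustrative numbers).
import math

def transfers_needed(value_bits, bus_width):
    return math.ceil(value_bits / bus_width)

assert transfers_needed(64, 32) == 2   # 64-bit word over a 32-bit bus
assert transfers_needed(64, 64) == 1   # one transfer on a 64-bit bus
```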

Address Bus
- The address lines are used to designate the source or destination of the data on the data
bus.
- For example, if the processor wishes to read a word (8, 16 or 32 bits) of data from
memory, it puts the address of the desired word on the address lines.
- Clearly, the width of the address bus determines the maximum possible memory capacity
of the system.
- The address lines are generally also used to address I/O ports.
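The capacity bound is easy to quantify: n address lines can name 2^n distinct locations. A small sketch:

```python
# n address lines can name 2**n distinct locations, so the address bus
# width bounds the maximum memory capacity (in addressable units).

def max_locations(address_lines):
    return 2 ** address_lines

assert max_locations(16) == 65_536           # 16-bit address bus: 64 Ki locations
assert max_locations(32) == 4_294_967_296    # 32-bit address bus: 4 Gi locations
```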
Control Bus
- The control lines are used to control the access to and the use of the data and address
lines.
- Because the data and address lines are shared by all components, there must be a means
of controlling their use.
- Control signals transmit both command and timing information among system modules.
- Timing signals indicate the validity of data and address information.
- Command signals specify operations to be performed. Typical control lines include:
Memory write: Causes data on the bus to be written into the addressed location.
Memory read: Causes data from the addressed location to be placed on the bus.
I/O write: Causes data on the bus to be output to the addressed I/O port.
I/O read: Causes data from the addressed I/O port to be placed on the bus.
Transfer ACK: Indicates that data have been accepted from or placed on the bus.
Bus request: Indicates that a module needs to gain control of the bus.
Bus grant: Indicates that a requesting module has been granted control of the bus.
Interrupt request: Indicates that an interrupt is pending.
Interrupt ACK: Acknowledges that the pending interrupt has been recognized.
Clock: Is used to synchronize operations.
Reset: Initializes all modules.
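A toy sketch (hypothetical, not any real bus protocol) of how the Memory read and Memory write command signals listed above steer a transfer:

```python
# Toy model: a control signal selects the operation, the address lines
# pick the location, and the data lines carry the value.

memory = [0] * 16          # 16 addressable locations

def bus_cycle(control, address, data=None):
    if control == "MEM_WRITE":
        memory[address] = data         # data on the bus -> addressed location
        return None
    if control == "MEM_READ":
        return memory[address]         # addressed location -> data bus
    raise ValueError("unknown control signal: " + control)

bus_cycle("MEM_WRITE", 3, 42)          # Memory write
assert bus_cycle("MEM_READ", 3) == 42  # Memory read returns what was stored
```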
Multiple - Bus Hierarchies
If a great number of devices are connected to the bus, performance will suffer. There are two
main causes:
- In general, the more devices attached to the bus, the greater the bus length and hence the
greater the propagation delay. This delay determines the time it takes for devices to
coordinate the use of the bus. When control of the bus passes from one device to another
frequently, these propagation delays can noticeably affect performance.

- The bus may become a bottleneck as the aggregate data transfer demand approaches the
capacity of the bus. This problem can be countered to some extent by increasing the data
rate that the bus can carry and by using wider buses (e.g., increasing the data bus from 32
to 64 bits). However, because the data rates generated by attached devices (e.g., graphics
and video controllers, network interfaces) are growing rapidly, this is a race that a single
bus is ultimately destined to lose.
Accordingly, most computer systems use multiple buses, generally laid out in a hierarchy. A
typical traditional structure is shown in Figure 5. There is a local bus that connects the processor
to a cache memory and that may support one or more local devices. The cache memory
controller connects the cache not only to this local bus, but to a system bus to which are attached
all of the main memory modules. The use of a cache structure insulates the processor from a
requirement to access main memory frequently. Hence, main memory can be moved off of the
local bus onto a system bus. In this way, I/O transfers to and from the main memory across the
system bus do not interfere with the processor's activity.
It is possible to connect I/O controllers directly onto the system bus. A more efficient solution is
to make use of one or more expansion buses for this purpose. An expansion bus interface buffers
data transfers between the system bus and the I/O controllers on the expansion bus. This
arrangement allows the system to support a wide variety of I/O devices and at the same time
insulate memory-to-processor traffic from I/O traffic.

Figure 5: Traditional Bus Structure
