
CSC 232 Computer Organization and Architecture

Dr. Charles Nche


Structure of von Neumann machine
1.2 Architectural Development & Styles

• Computer architects have always been striving to increase the performance of their architectures.
• One philosophy was that by doing more in a single instruction, one can use a smaller number of instructions to perform the same job.
• The immediate consequence of this is the need for fewer memory read/write operations and an eventual speedup of operations.


• It was also argued that increasing the complexity of instructions and the number of addressing modes has the theoretical advantage of reducing the “semantic gap” between the instructions in a high-level language and those in the low-level (machine) language.
• Machines following this philosophy have been referred to as complex instruction set computers (CISCs). Examples of CISC machines include the Intel Pentium, the Motorola MC68000, and the IBM & Macintosh PowerPC.

• A number of studies from the mid-1970s and early 1980s also identified that in typical programs, more than 80% of the instructions executed are those involving assignment statements, conditional branching, and procedure calls.
• Simple assignment statements constitute almost 50% of those operations. These findings caused a different philosophy to emerge.
• This philosophy promotes the optimization of architectures
by speeding up those operations that are most frequently
used while reducing the instruction complexities and the
number of addressing modes.


• Machines following this philosophy have been referred to as reduced instruction set computers (RISCs). Examples of RISCs include the Sun SPARC and the MIPS machines.
• The two philosophies of architecture design have led to a still unresolved controversy over which architecture style is "best".
• It should, however, be mentioned that studies have indicated that RISC architectures would indeed lead to faster execution of programs.
1.3 Technological Development

• Computer technology has shown an unprecedented rate of improvement. This includes the development of processors and memories.
• This impressive increase has been made possible by advances in the fabrication technology of transistors.
• The scale of integration has grown from small-scale integration (SSI) to medium-scale integration (MSI) to large-scale integration (LSI) to very large scale integration (VLSI), and currently to wafer-scale integration (WSI).
1.4 Performance Measures

• There are various facets to the performance of a computer.
• A metric for assessing the performance of a computer helps in comparing alternative designs.
• Performance analysis should help answer questions such as: how fast can a given program be executed using a given computer?
• The clock cycle time is defined as the time between two consecutive rising edges of a periodic clock signal.

• We denote the number of CPU clock cycles for executing a job as the cycle count (CC), the cycle time by CT, and the clock frequency by f = 1/CT.
• The time taken by the CPU to execute a job can be expressed as: CPU time = CC * CT = CC / f
• It is easier to count the number of instructions executed in a given program than to count the number of CPU clock cycles needed to execute that program.
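A minimal sketch of the CPU time relationship above; the cycle count is a made-up illustration value, while the 200 MHz clock matches the rate used in the examples later in this section:

cycle_count = 2_000_000        # CC: clock cycles needed by the job (hypothetical)
clock_rate_hz = 200e6          # f = 200 MHz, so CT = 1/f = 5 ns
cpu_time = cycle_count / clock_rate_hz       # CPU time = CC / f
print(f"CPU time = {cpu_time * 1e3:.2f} ms")   # 10.00 ms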

• Therefore, the average number of clock cycles per instruction (CPI) has been used as an alternative performance measure.
• The CPI is computed by dividing the total number of CPU clock cycles by the number of instructions executed:
  CPI = CC / Instruction count


• The instruction set of a given machine consists of a number of instruction categories: ALU (simple assignment and arithmetic and logic instructions), load, store, branch, and so on.
• In the case that the CPI for each instruction category is known, the overall CPI is given by:
  CPI = (Σ CPIi * Ii) / Instruction count
  where Ii is the number of times an instruction of type i is executed in the program and CPIi is the average number of clock cycles needed to execute such an instruction.
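A minimal sketch of this weighted-CPI computation; the per-category counts and CPI values below are hypothetical and are not the benchmark figures from the slides:

# Hypothetical per-category instruction counts (Ii) and cycles per instruction (CPIi).
categories = {
    "ALU":    (50, 1),   # (Ii, CPIi)
    "load":   (20, 3),
    "store":  (10, 3),
    "branch": (20, 4),
}
total_cycles = sum(count * cpi for count, cpi in categories.values())
instruction_count = sum(count for count, _ in categories.values())
overall_cpi = total_cycles / instruction_count
print(f"Overall CPI = {overall_cpi:.2f}")   # (50*1 + 20*3 + 10*3 + 20*4) / 100 = 2.20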

• Example: What is the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs? Assume that the clock rate of the CPU is 200 MHz.


• The CPI reflects the organization and the instruction set architecture of the processor, while the instruction count reflects the instruction set architecture and the compiler technology used.
• This shows the degree of interdependence between the two performance parameters.
• It is therefore imperative that both the CPI and the instruction count are considered in assessing the merits of a given computer, or equivalently, in comparing the performance of two machines.

• A different performance measure that has been given a lot of attention in recent years is MIPS (million instructions per second), the rate of instruction execution per unit time, defined as follows:
  MIPS = Instruction count / (Execution time * 10^6) = Clock rate / (CPI * 10^6)
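A minimal sketch of the MIPS computation above, reusing the hypothetical CPI from the earlier sketch (illustration values only, not the slides' benchmark results):

clock_rate_hz = 200e6          # 200 MHz, as in the examples in this section
overall_cpi = 2.2              # hypothetical weighted CPI from the earlier sketch
mips = clock_rate_hz / (overall_cpi * 1e6)
print(f"MIPS rating = {mips:.1f}")   # 200 / 2.2 ≈ 90.9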


• Example: Suppose that the same set of benchmark programs considered above were executed on another machine, call it machine B, for which the following measures were recorded.


• What is the MIPS rating for machine A and machine B, assuming a clock rate of 200 MHz?


• Consider the measurements made on two different machines running a given set of benchmark programs.


• MFLOPS (million floating-point instructions per second), the rate of floating-point instruction execution per unit time, has also been used as a measure of machine performance. MFLOPS is defined as follows:
  MFLOPS = Number of floating-point instructions in the program / (Execution time * 10^6)


• Speedup is a measure of how a machine performs after some enhancement relative to its original performance:
  Speedup (SU) = Execution time before enhancement / Execution time after enhancement
• Amdahl's law, formulated below, relates this speedup to the fraction of time for which an enhancement can actually be applied.

• Sometimes it may be possible to achieve a performance enhancement for only a fraction of the time, ∆.
• Amdahl's law relates the speedup SU∆, due to an enhancement applied for only a fraction of the time ∆, to the speedup SUo that would result from applying the enhancement all of the time. This relationship can be expressed as:
  SU∆ = 1 / ((1 - ∆) + ∆ / SUo)
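A minimal sketch of this relationship (the function name is ours; the values in the call are those of the example that follows):

def partial_speedup(delta: float, su_overall: float) -> float:
    # Amdahl's law: speedup when an enhancement with full-time speedup su_overall
    # can be applied only for a fraction delta of the execution time.
    return 1.0 / ((1.0 - delta) + delta / su_overall)

print(partial_speedup(0.3, 30))   # ≈ 1.41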


• Consider a machine for which a speedup of 30 is possible after applying an enhancement. If, under certain conditions, the enhancement was only possible for 30% of the time, what is the speedup due to this partial application of the enhancement?
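Applying the relationship above: SU∆ = 1 / ((1 - 0.3) + 0.3/30) = 1 / 0.71 ≈ 1.4.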


• The above formula can be generalized to account for the case whereby a number of different independent enhancements can be applied separately and for different fractions of the time, ∆1, ∆2, . . . , ∆n, leading respectively to the speedup enhancements SU∆1, SU∆2, . . . , SU∆n (see the sketch below).
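A minimal sketch of this generalization, assuming the standard form in which the unenhanced fraction runs at the original speed and each fraction ∆i is sped up by its own factor SU∆i; the fractions and speedups used in the call are hypothetical:

def overall_speedup(deltas, speedups):
    # Generalized Amdahl relationship: 1 / ((1 - sum of fractions) + sum of delta_i / SU_i).
    unenhanced = 1.0 - sum(deltas)
    enhanced = sum(d / s for d, s in zip(deltas, speedups))
    return 1.0 / (unenhanced + enhanced)

print(overall_speedup([0.3, 0.2], [30, 10]))   # ≈ 1.89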

1.5 Summary
• A brief historical background for the
development of computer systems was
provided, starting from the first recorded
attempt to build a computer, the Z1, in 1938,
passing through the CDC 6600 and the Cray
supercomputers, and ending up with today's
modern high performance machines.
• Then a discussion was provided on the RISC
versus CISC architectural styles and their
impact on machine performance.
• This was followed by a brief discussion on the
technological development and its impact on
computing performance.
• This chapter was concluded with a detailed
treatment of the issues involved in assessing
the performance of computers.
• Possible ways of evaluating the speedup for
given partial or general improvement
measurements of a machine were also
discussed.
Speeding it up
• Pipelining
• On-board cache
• On-board L1 & L2 cache
• Branch prediction
• Data flow analysis
• Speculative execution
Performance Balance
• Processor speed increased
• Memory capacity increased
• Memory speed lags behind processor
speed
Logic and Memory Performance Gap
Solutions
• Increase number of bits retrieved at one
time
—Make DRAM “wider” rather than “deeper”
• Change DRAM interface
—Cache
• Reduce frequency of memory access
—More complex cache and cache on chip
• Increase interconnection bandwidth
—High speed buses
—Hierarchy of buses
I/O Devices
• Peripherals with intensive I/O demands
• Large data throughput demands
• Processors can handle this
• Problem moving data
• Solutions:
—Caching
—Buffering
—Higher-speed interconnection buses
—More elaborate bus structures
—Multiple-processor configurations
Typical I/O Device Data Rates
Key is Balance
• Processor components
• Main memory
• I/O devices
• Interconnection structures
Improvements in Chip Organization and
Architecture
• Increase hardware speed of processor
—Fundamentally due to shrinking logic gate size
– More gates, packed more tightly, increasing clock
rate
– Propagation time for signals reduced
• Increase size and speed of caches
—Dedicating part of processor chip
– Cache access times drop significantly
• Change processor organization and
architecture
—Increase effective speed of execution
—Parallelism
Problems with Clock Speed and Logic
Density
• Power
— Power density increases with density of logic and clock
speed
— Dissipating heat
• RC delay
— Speed at which electrons flow limited by resistance and
capacitance of metal wires connecting them
— Delay increases as RC product increases
— Wire interconnects thinner, increasing resistance
— Wires closer together, increasing capacitance
• Memory latency
— Memory speeds lag processor speeds
• Solution:
— More emphasis on organizational and architectural
approaches
Intel Microprocessor Performance
Increased Cache Capacity
• Typically two or three levels of cache
between processor and main memory
• Chip density increased
—More cache memory on chip
– Faster cache access
• Pentium chip devoted about 10% of chip
area to cache
• Pentium 4 devotes about 50%
More Complex Execution Logic
• Enable parallel execution of instructions
• Pipeline works like assembly line
—Different stages of execution of different
instructions at same time along pipeline
• Superscalar allows multiple pipelines within a single processor
—Instructions that do not depend on one another can be executed in parallel (see the sketch below)
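To make the assembly-line analogy concrete, here is a minimal back-of-envelope sketch comparing purely sequential execution with an ideal pipeline; it assumes no stalls, hazards, or branch penalties, so the numbers are best-case illustrations:

def sequential_cycles(n_instructions: int, n_stages: int) -> int:
    # Each instruction occupies the whole datapath for all of its stages.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions: int, n_stages: int) -> int:
    # The first instruction fills the pipeline; afterwards one instruction
    # completes per cycle (ideal case, no stalls).
    return n_stages + (n_instructions - 1)

n, k = 1000, 5
print(sequential_cycles(n, k), pipelined_cycles(n, k))   # 5000 vs 1004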
Diminishing Returns
• Internal organization of processors
complex
—Can get a great deal of parallelism
—Further significant increases likely to be
relatively modest
• Benefits from cache are reaching limit
• Increasing clock rate runs into power
dissipation problem
—Some fundamental physical limits are being
reached
New Approach – Multiple Cores
• Multiple processors on single chip
— Large shared cache
• Within a processor, increase in performance
proportional to square root of increase in
complexity
• If software can use multiple processors, doubling
number of processors almost doubles
performance
• So, use two simpler processors on the chip
rather than one more complex processor
• With two processors, larger caches are justified
— Power consumption of memory logic less than
processing logic
• Example: IBM POWER4
— Two cores based on PowerPC
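A rough numeric illustration of this trade-off; the square-root rule of thumb is the one quoted above, while the 90% parallel efficiency is our own assumption standing in for "almost doubles":

complexity_increase = 2.0
single_core_gain = complexity_increase ** 0.5   # performance ~ sqrt(complexity) -> ≈ 1.41x
dual_core_gain = 2.0 * 0.9                      # two cores, assuming ~90% parallel efficiency
print(f"{single_core_gain:.2f}x vs ~{dual_core_gain:.1f}x")   # 1.41x vs ~1.8x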
POWER4 Chip Organization
Pentium Evolution (1)
• 8080
— first general purpose microprocessor
— 8 bit data path
— Used in first personal computer – Altair
• 8086
— much more powerful
— 16 bit
— instruction cache, prefetch few instructions
— 8088 (8 bit external bus) used in first IBM PC
• 80286
— 16 Mbyte memory addressable
— up from 1 MByte
• 80386
— 32 bit
— Support for multitasking
Pentium Evolution (2)
• 80486
—sophisticated powerful cache and instruction
pipelining
—built-in maths co-processor
• Pentium
—Superscalar
—Multiple instructions executed in parallel
• Pentium Pro
—Increased superscalar organization
—Aggressive register renaming
—branch prediction
—data flow analysis
—speculative execution
Pentium Evolution (3)
• Pentium II
— MMX technology
— graphics, video & audio processing
• Pentium III
— Additional floating point instructions for 3D graphics
• Pentium 4
— Note Arabic rather than Roman numerals
— Further floating point and multimedia enhancements
• Itanium
— 64 bit
— see chapter 15
• Itanium 2
— Hardware enhancements to increase speed
• See Intel web pages for detailed information on
processors
PowerPC
• 1975, 801 minicomputer project (IBM) RISC
• Berkeley RISC I processor
• 1986, IBM commercial RISC workstation product, RT PC.
— Not commercial success
— Many rivals with comparable or better performance
• 1990, IBM RISC System/6000
— RISC-like superscalar machine
— POWER architecture
• IBM alliance with Motorola (68000 microprocessors) and Apple (used 68000 in Macintosh)
• Result is PowerPC architecture
— Derived from the POWER architecture
— Superscalar RISC
— Apple Macintosh
— Embedded chip applications
PowerPC Family (1)
• 601:
— Quickly to market. 32-bit machine
• 603:
— Low-end desktop and portable
— 32-bit
— Comparable performance with 601
— Lower cost and more efficient implementation
• 604:
— Desktop and low-end servers
— 32-bit machine
— Much more advanced superscalar design
— Greater performance
• 620:
— High-end servers
— 64-bit architecture
PowerPC Family (2)
• 740/750:
—Also known as G3
—Two levels of cache on chip
• G4:
—Increases parallelism and internal speed
• G5:
—Improvements in parallelism and internal
speed
—64-bit organization
Internet Resources
• http://www.intel.com/
—Search for the Intel Museum
• http://www.ibm.com
• http://www.dec.com
• Charles Babbage Institute
• PowerPC
• Intel Developer Home
