
COMPUTER PERFORMANCE

ENHANCING TECHNIQUES
Ms. P. Latha, M.E., (Ph.D.)
Associate Professor,
Department of Electronics and Communication Engineering,
R.M.K. Engineering College
A BRIEF HISTORY OF COMPUTERS
First Generation: Vacuum Tubes
ENIAC
The ENIAC (Electronic Numerical Integrator And Computer), designed and constructed at the University of Pennsylvania, was the world's first general-purpose electronic digital computer. The project was a response to U.S. needs during World War II.

THE SECOND GENERATION: TRANSISTORS
 The first major change in the electronic computer came with
the replacement of the vacuum tube by the transistor.
 The transistor is smaller, cheaper, and dissipates less heat than a vacuum tube, but can be used in the same way as a vacuum tube to construct computers. Unlike the vacuum tube, which requires wires, metal plates, a glass capsule, and a vacuum, the transistor is a solid-state device made from silicon. The transistor was invented at Bell Labs in 1947, and by the 1950s it had launched an electronic revolution.
 Example: IBM 7094
THE THIRD GENERATION: INTEGRATED CIRCUITS

 Later generations of computers were based on advances in integrated circuit technology.
 With the introduction of large-scale integration (LSI), more than 1,000 components could be placed on a single integrated circuit chip.
 Very-large-scale integration (VLSI) achieved more than 10,000 components per chip, while current ultra-large-scale integration (ULSI) chips can contain more than one million components.
COMPONENTS OF A COMPUTER
A computer system consists of both hardware and information stored on hardware. Information stored on computer hardware is often called software.
 The hardware components of a computer system are the electronic and mechanical parts.
 The software components of a computer system are the data and the computer programs.
COMPONENTS OF A COMPUTER CONTD..

 The processor gets instructions and data from memory.
 Input writes data to memory, and output reads data from memory.
 Control sends the signals that determine the operations of the data path, memory, input, and output.
 Program – a list of instructions that performs a task.
INPUT UNITS
 Reads the data.
 Examples: keyboard, joystick, computer mouse – graphical input devices
 Example: microphone – captures audio input
OUTPUT UNIT

 Its function is to send processed results to the outside world.
 Examples: printers, displays, projectors



MEMORY

 Stores programs and data.
 Two classes of storage: primary and secondary
 Primary storage
 Fast memory, expensive
 Memory contains a number of semiconductor storage cells, each of which can store one bit of information.
 A group of cells is called a word (n bits).
 Each memory word location has a distinct address. Addresses are numbers that identify successive locations.
 The number of bits in each word is the word length.
 Instructions and data can be written into the memory or read out
under the control of the processor
COMPUTER MEMORIES
ARITHMETIC AND LOGIC UNIT (ALU)

 The arithmetic and logic operations of computer programs are executed in the ALU.
 Arithmetic or logic operations, e.g. multiplication, division, or comparison of numbers, are performed by the ALU.
 The control and arithmetic and logic units are many times
faster than other devices.
 The operands for operations are stored in high-speed storage
elements called registers.
 Each register can store one word of data.
 Access times to registers are somewhat faster than access
times to the fastest cache unit in the memory hierarchy
CONTROL UNITS

 The memory, arithmetic and logic, and input and output units store and
process information and perform input and output operations.
 The control unit coordinates these operations and is the nerve centre that
sends control signals to other units and senses their states.
 The timing signals that govern the I/O transfers are generated by the
control circuits.
 Timing signals also control data transfer between the processor and the
memory.
 Timing signals are the signals that determine when a given action is to take
place.
 It is a physically separate unit that interacts with other parts of the machine.
 A large set of control lines (wires) carries the signals used for timing and
synchronization of events in all units.
EIGHT IDEAS –TO IMPROVE PERFORMANCE
 1. Design for Moore's Law

 The one constant for computer designers is rapid change, which is driven
largely by Moore's Law.
 It states that integrated circuit resources double every 18–24 months.
 Moore's Law resulted from a 1965 prediction of such growth in IC capacity
Moore's Law made by Gordon Moore, one of the founders of Intel.
As computer designs can take years, the resources available per chip can
easily double or quadruple between the start and finish of the project.
 Computer architects must anticipate this rapid change.
 Icon used: an "up and to the right" Moore's Law graph, representing design for rapid change.
2. USE ABSTRACTION TO SIMPLIFY DESIGN

 Both computer architects and programmers have had to invent techniques to make themselves more productive.
 A major productivity technique for hardware and software is to use abstractions to represent the design at different levels of representation; lower-level details are hidden to offer a simpler model at higher levels.
 Icon used: an abstract painting.


3. MAKE THE COMMON CASE FAST

 Making the common case fast will tend to enhance performance better than optimizing the rare case.
 The common case is often simpler than the rare case, and it is often easier to enhance.
 Making the common case fast is only possible with careful experimentation and measurement.
 Icon used: a sports car.
4.PERFORMANCE VIA PARALLELISM

 Computer architects have offered designs that get more performance by performing operations in parallel.
 Icon used: the multiple jet engines of a plane, the icon for parallel performance.
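As a hedged illustration of the idea above (with made-up numbers, not from the slides), splitting identical independent operations across workers divides the time:

```python
# Illustrative sketch: ideal speedup from performing operations in
# parallel. Assumes `total_ops` identical, independent operations that
# divide evenly among `workers`; all numbers below are assumptions.

def ideal_parallel_time(total_ops, time_per_op, workers):
    """Time to finish when the operations are split across the workers."""
    ops_per_worker = total_ops / workers
    return ops_per_worker * time_per_op

serial = ideal_parallel_time(1_000_000, 1e-9, 1)    # one worker
parallel = ideal_parallel_time(1_000_000, 1e-9, 4)  # four workers
speedup = serial / parallel                         # ideal speedup of 4
```

Real designs fall short of this ideal because of communication costs and load imbalance.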
5. PERFORMANCE VIA PIPELINING

 Pipelining is an implementation technique in which multiple instructions are overlapped in execution. Pipelining improves performance by increasing instruction throughput.
 For example, before fire engines, a human chain passing buckets of water from the source to the fire could fight it much more quickly than individuals with buckets running back and forth.
 Icon used: a pipeline icon, a sequence of pipes with each section representing one stage of the pipeline.
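The throughput gain can be sketched with a back-of-the-envelope cycle count (the stage and instruction counts below are illustrative assumptions):

```python
# Sketch: cycle counts with and without pipelining, assuming an
# idealized pipeline with one-cycle stages and no stalls.

def unpipelined_cycles(n_instructions, n_stages):
    # Each instruction passes through every stage before the next starts.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages):
    # The first instruction fills the pipeline; after that, one
    # instruction completes every cycle.
    return n_stages + (n_instructions - 1)

n, stages = 1000, 5
without = unpipelined_cycles(n, stages)   # 5000 cycles
with_pipe = pipelined_cycles(n, stages)   # 1004 cycles
```

For a large number of instructions, the speedup approaches the number of pipeline stages.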
6. PERFORMANCE VIA PREDICTION

 Following the saying that it can be better to ask for forgiveness than to ask for permission, the next great idea is prediction.
 In some cases it can be faster on average to guess and start working rather than wait until you know for sure.
 Prediction works when the mechanism to recover from a misprediction is not too expensive and the prediction is relatively accurate.
 Icon used: a fortune-teller's crystal ball.
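One concrete form of this idea is branch prediction. Below is a minimal 1-bit predictor sketch; this scheme is an assumption for illustration (the slides do not specify one), and real predictors are more elaborate:

```python
# Sketch of a 1-bit branch predictor: guess that the branch will do
# whatever it did last time; a wrong guess is a misprediction that the
# hardware must recover from.

def one_bit_predictor_accuracy(outcomes):
    """Fraction of correct guesses over a branch history.
    outcomes: sequence of booleans (True = branch taken)."""
    prediction = True  # initial guess: taken
    correct = 0
    for taken in outcomes:
        if prediction == taken:
            correct += 1
        prediction = taken  # remember the most recent outcome
    return correct / len(outcomes)

# A loop branch taken 9 times, then not taken on loop exit:
accuracy = one_bit_predictor_accuracy([True] * 9 + [False])  # 0.9
```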


7.HIERARCHY OF MEMORIES

 Programmers want memory to be fast, large, and cheap.
 Architects have found that they can address these conflicting demands with a hierarchy of memories: the fastest, smallest, and most expensive memory per bit is at the top of the hierarchy; the slowest, largest, and cheapest per bit is at the bottom.
 Caches give the programmer the illusion that main memory is
nearly as fast as the top of the hierarchy and nearly as big and
cheap as the bottom of the hierarchy.
 Icon Used: a layered triangle icon represents the memory
hierarchy.
 The shape indicates speed, cost, and size: the closer to the top,
the faster and more expensive per bit the memory; the wider the
base of the layer, the bigger the memory.
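The "illusion" that caches provide can be quantified with the standard average-memory-access-time formula (the latencies and miss rate below are illustrative assumptions):

```python
# Sketch: average memory access time (AMAT) for a cache in front of a
# slower main memory.  AMAT = hit time + miss rate * miss penalty.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache with a 5% miss rate backed by 100 ns main memory:
average_ns = amat(1.0, 0.05, 100.0)  # 6.0 ns on average
```

Even though main memory is 100 times slower than the cache, the average access is only about 6 times slower, because the common case (a hit) is fast.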
8. DEPENDABILITY VIA REDUNDANCY

 Computers not only need to be fast; they need to be dependable.
 Since any physical device can fail, systems can be made dependable by including redundant components.
 These components can take over when a failure occurs and help detect failures.
 Icon used: a tractor-trailer, since the dual tires on each side of its rear axles allow the truck to continue driving even when one tire fails.
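A quick probability sketch shows why redundancy helps (independent failures and the failure probability used are illustrative assumptions):

```python
# Sketch: if each component fails independently with probability p, a
# system that works as long as at least one of n redundant components
# works fails only when all n fail.

def system_failure_probability(p, n):
    return p ** n

single = system_failure_probability(0.01, 1)  # 1 in 100
dual = system_failure_probability(0.01, 2)    # 1 in 10,000
```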
TECHNOLOGY

 Processors and memory have improved at an incredible rate, fueling the race to design a better computer.
 A transistor is simply an on/off
switch controlled by electricity.
 The IC combined dozens to
hundreds of transistors into a
single chip. When Gordon Moore
predicted the continuous
doubling of resources, he was
predicting the growth rate of the
number of transistors per chip.
MANUFACTURING PROCESS OF IC
PERFORMANCE

 Performance is defined by speed, i.e. how fast a program gets executed on a desktop, or the number of jobs completed in a day for a server.
 Response time (execution time / elapsed time) – the total time required for the computer to complete a task, including disk accesses, memory accesses, I/O activities, operating system overhead, CPU execution time, and so on; i.e. the time between the start and completion of a task.
 Throughput (bandwidth) – the number of tasks completed per unit time.
 A faster processor improves both response time and throughput; adding processors to a system improves throughput. To maximize performance we should minimize response time.
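The definitions above can be expressed directly (the task times and counts below are illustrative):

```python
# Sketch: performance as the inverse of execution time, and throughput
# as tasks completed per unit time.

def performance(execution_time_s):
    return 1.0 / execution_time_s

def throughput(tasks_completed, elapsed_time_s):
    return tasks_completed / elapsed_time_s

# Computer X runs a program in 10 s and computer Y runs it in 15 s:
ratio = performance(10.0) / performance(15.0)  # X is 1.5 times faster

# A server completing 120 jobs in an hour:
per_second = throughput(120, 3600.0)  # 1/30 of a job per second
```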
THE RELATION BETWEEN PERFORMANCE AND EXECUTION TIME FOR A COMPUTER X:
 Performance_X = 1 / Execution time_X
 If computer X is n times faster than computer Y, then
 Performance_X / Performance_Y = Execution time_Y / Execution time_X = n
MEASURING PERFORMANCE:
 Time is the measure of computer performance.
 Program execution time is measured in seconds per program.
 CPU execution time (or CPU time) is the time the CPU spends computing for the specific task; it does not include time spent waiting for I/O or running other programs. CPU time can be further divided into:
 (i) User CPU time :The CPU time spent in a program itself.
 (ii) System CPU time: The CPU time spent in the operating system
performing tasks on behalf of the program.
 Since it is difficult to measure and differentiate system and user CPU times, we distinguish performance based on elapsed time (system performance) from performance based on CPU execution time (CPU performance).
CPU PERFORMANCE AND ITS FACTORS
 Clock cycle (also called tick, clock tick, clock period, clock, or cycle) – the time for one clock period, usually of the processor clock, which runs at a constant rate.
 Clock period (P) – the length of each clock cycle.
 Clock rate (R) = 1/P, i.e. the inverse of the clock period.
Performance can be improved by reducing the number of clock cycles required for a program or by reducing the clock period.
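These relations can be checked with a small sketch (the cycle count and clock rate are illustrative assumptions):

```python
# Sketch: clock rate R = 1/P, so CPU time = clock cycles x clock period.

def cpu_time_seconds(clock_cycles, clock_rate_hz):
    clock_period = 1.0 / clock_rate_hz  # P = 1/R, seconds per cycle
    return clock_cycles * clock_period

# 10 billion clock cycles on a 2 GHz (2e9 cycles/second) processor:
t = cpu_time_seconds(10e9, 2e9)  # 5.0 seconds
```

Halving the cycle count, or doubling the clock rate, halves the CPU time.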
CLOCK CYCLES PER INSTRUCTION (CPI):
 Since different instructions may take different amounts of time depending on what they do, CPI is an average number of clock cycles per instruction for a program or program fragment.
 Basic performance equation (classical CPU performance equation) in terms of instruction count (the number of instructions executed by the program), CPI, and clock cycle time:
 CPU time = Instruction count × CPI × Clock cycle time = (Instruction count × CPI) / Clock rate
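A sketch of the classical CPU performance equation, with illustrative operand values (not from the slides):

```python
# Sketch: CPU time = instruction count x CPI x clock cycle time
#                  = instruction count x CPI / clock rate

def classical_cpu_time(instruction_count, cpi, clock_rate_hz):
    return instruction_count * cpi / clock_rate_hz

# 2 billion instructions at an average CPI of 1.5 on a 2 GHz clock:
t = classical_cpu_time(2e9, 1.5, 2e9)  # 1.5 seconds
```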
COMPONENTS AFFECT THE FACTORS IN THE
CPU PERFORMANCE EQUATION

 Algorithm – affects Instruction count & CPI


 Programming Language – affects Instruction count & CPI
 Hardware – affects clock rate
 Compiler – affects Instruction count & CPI
 Instruction set architecture – affects Instruction count, clock
rate & CPI [CISC – Complex Instruction Set Architecture &
RISC – Reduced Instruction Set Architecture]
 Cache memory – affects CPI
 Pipelining and superscalar processors – affect CPI
