
Introduction:

The von Neumann bottleneck is a limitation on throughput caused by the standard stored-program
computer architecture.
The term is named for John von Neumann, who developed the theory behind the architecture of
modern computers. Earlier computers were fed programs and data for processing while they
were running. Von Neumann came up with the idea behind the stored program computer, our
standard model, which is also known as the von Neumann architecture. In the von Neumann
architecture, programs and data are held in memory; the processor and memory are separate and
data moves between the two. In that configuration, latency is unavoidable.
Furthermore, in recent years, processor speeds have increased significantly. Memory
improvements, on the other hand, have mostly been in density (the ability to store more data in
less space) rather than in transfer rates. As speeds have increased, the processor has spent an
increasing amount of time idle, waiting for data to be fetched from memory. No matter how fast
a given processor can work, in effect it is limited to the rate of transfer allowed by the bottleneck.
Often, a faster processor just means that it will spend more time idle.
More About the Bottleneck:
Through the years, a variety of problems have plagued the development of faster, smaller, and
cheaper computer hardware. Generally, the faster and smaller the component, the more it costs.
Fortunately, over the years computer hardware has improved roughly according to the
predictions of Intel co-founder Gordon Moore, who predicted that the number of transistors that
would fit on a single integrated circuit would double every two years. However, in order to reap
the full benefits of this rapid growth, developers have been faced with the task of overcoming
one of the oldest, most pervasive problems in computer hardware: the von Neumann
bottleneck.
The von Neumann bottleneck is named for John von Neumann, who designed the standard
model of computing that is used today. Programs and data are held together in memory; the
processor and memory are separate, with data moving between them. (In contrast, program and
data memory are separate in the Harvard architecture.) Unfortunately, the von Neumann architecture
results in a limitation that has to be dealt with through significant changes to computer or
processor architectures; this limitation is known as the von Neumann bottleneck.
The bottleneck occurs when the processor is idle while it waits for data to be fetched from
memory. As processor speeds increase while memory transfer rates remain relatively stable, the
processor in a given computer remains limited to the rate of transfer permitted by the bottleneck.
Programmers have also been criticized for thinking in terms of the stored-program model,
because it means that they spend significant time trying to optimize code to prevent data from
being pushed back and forth.
The von Neumann bottleneck has often been considered a problem that can only be overcome
through significant changes to computer or processor architectures.
The term "von Neumann bottleneck" was coined in 1977 by John Backus, in his Turing Award lecture.
It refers to two things:

A systems bottleneck, in that the bandwidth between CPUs and random-access memory (RAM) is
much lower than the speed at which a typical CPU can process data internally.
The 'intellectual bottleneck', in that programmers at the time spent a lot of time thinking about
code optimization to stop 'lots of words' being pushed back and forth between CPU and
RAM.

In terms of the systems bottleneck, a lot has been done:

CPU Caches
Branch predictor logic
Many types of Parallel Computing

In terms of the intellectual bottleneck, modern programming languages tend to abstract away the
concepts of memory reads and the like, and the vast majority of programmers can focus on
higher levels of abstraction.
But, to a large extent, Moore's Law (and its friends) has accelerated away a lot of the problems:
you don't need to think about programming efficiency for 90% of code; you can just throw a
faster CPU bus or bigger L2 cache at the problem, or just wait a couple of years until entry-level
chips are fast enough.
Procedures for overcoming the von Neumann bottleneck
Approaches to overcoming the von Neumann bottleneck include:
Caching -- the storage of frequently used data in a small, fast memory close to the processor, so
that it is more readily accessible than if it were stored in main memory.
Prefetching -- moving some data into cache before it is requested to speed access in the event of
a request.
Multithreading -- managing multiple requests simultaneously in separate threads.

New types of RAM (random access memory) -- for example, DDR SDRAM, which transfers
data on both the rising and falling edges of the system clock rather than on just the rising edge,
potentially doubling throughput.
Rambus -- a memory subsystem consisting of the RAM, the RAM controller, and the bus
(path) connecting RAM to the microprocessor and devices in the computer that use it.
Processing in memory (PIM), which integrates a processor and memory in a single microchip.
