
DDR Memory Technology

White Paper

This White Paper provides a comprehensive overview of memory technology, with a focus on
DDR DRAM.

August 2001
© 2001 Acer Incorporation. All rights reserved.
This paper is for informational purposes only. ACER MAKES NO WARRANTIES,
EXPRESS OR IMPLIED, IN THIS DOCUMENT.
Acer, Acer Altos are registered trademarks or trademarks of Acer Incorporation.
Microsoft, Windows 2000 Advanced Server, Windows 2000 Datacenter Server,
and AMI product range are either registered trademarks or trademarks of
Microsoft Corporation.
Other product or company names mentioned herein may be the trademarks of
their respective owners.
CONTENTS

MEMORY TECHNOLOGY BASICS
ROM
PROM (Programmable ROM)
EPROM (Erasable Programmable ROM)
EEPROM (Electrically Erasable Programmable ROM)
RAM
SRAM (Static RAM)
DRAM (Dynamic RAM)

DRAM TYPES
Asynchronous DRAM models
FPM (Fast Page Mode)
EDO (Extended Data Output)
BEDO (Burst Extended Data Output)
SDRAM (Synchronous DRAM) Models
About SDRAM Speed Ratings
DRDRAM (Direct Rambus)
SLDRAM (Synchronous Link)
DDR SDRAM (Double Data Rate Synchronous)

ABOUT DDR SDRAM
Speed Rating
About Unbuffered and Registered modules
Unbuffered DIMMs
Registered DIMMs
Data Integrity

CONCLUSION

FOR MORE INFORMATION


MEMORY TECHNOLOGY BASICS

While the complex world of main system memory technology can be regarded
from many angles, it is beneficial to begin with a look at two large-scale
perspectives: ROM (Read-Only Memory) and RAM (Random Access Memory).
An important distinction between the two is that RAM is volatile memory; that
is, any data held in the memory is erased when the system shuts down. By
comparison, ROM is referred to as nonvolatile, meaning its data content
remains intact, irrespective of shutdown and boot activity. Note that RAM and
ROM are commonly used terms, and are used here, although both types
actually allow random access.

ROM
As the name implies, ROM memory can only be read in operation, preventing
the re-writing of contents as part of its normal function. Basic ROM stores
critical information in computers and other digital devices, information whose
integrity is vital to system operation and is unlikely to change. Other commonly
used forms of ROM are:

PROM (Programmable ROM)


Programmable ROM is a blank memory chip open for one-time only recording
of information that is then stored permanently.

EPROM (Erasable Programmable ROM)

This is PROM equipped with a special quartz window; applying UV light
through it erases the stored information, allowing subsequent rewriting.

EEPROM (Electrically Erasable Programmable ROM)

This type of ROM is rewritable by way of software. It is used in flash BIOS, in
which the software allows users to upgrade the stored BIOS information
(flashing).

RAM
The Random component of RAM’s name actually refers to this type of
memory’s ability to access any single byte of information without contacting or
affecting neighboring bytes. RAM plays a major role in system operations, and
specifically performance. Essentially, the more complex a program is, the more
its execution will benefit from the presence of both ample and efficient RAM
access.

RAM takes two forms, SRAM (Static RAM), and DRAM (Dynamic RAM). A
detailed discussion of both follows.

SRAM (Static RAM)

Static RAM provides significant advantages for performance, in that it holds
data without needing to be refreshed constantly. This allows access times as
low as 10 ns as well as a shorter cycle time.

A disadvantage is Static RAM's high cost to produce, limiting most of its
practical applications to memory caching functions. There are three types of
SRAM: Async SRAM, Sync SRAM, and Pipeline Burst SRAM.
Async SRAM

The designation Async is short for Asynchronous, meaning here that the SRAM
functions with no dependence on the system clock. Async SRAM is an older
type of SRAM, normally limited to L2 cache only.

Sync SRAM

Obviously, this is a synchronized form of SRAM, matched to the system clock's
operations. The marked speed advantage of this feature also dictates a higher
cost.

Pipeline Burst SRAM

Most common, this type of SRAM directs larger packets of data (requests)
to the memory at the same time (pipelining), allowing a much quicker
reaction on the part of the RAM, thus increasing access speed. This type of
SRAM accommodates bus speeds in excess of 66 MHz, hence its popularity.

DRAM (Dynamic RAM)

Dynamic RAM derives its name from its requirement for constant refreshing.
While the raw speed consideration renders it a secondary choice to SRAM, its
superior cost advantages make it a vastly more popular choice for system
memory.
A more detailed look at the various types of DRAM follows.

DRAM TYPES

In the last few years, a tremendous upsurge has occurred in the evolution of
DRAM technology. Originally, nearly all PC system memory was confined
to FPM (Fast Page Mode) DRAM. Rapid growth in both CPU and motherboard
bus speeds, however, spurred development of improved DRAM methods to
provide performance equal to faster and faster system capabilities. Numerous
options are now, and have been, available, some popular, some less so. They
fall into two main categories, Asynchronous and Synchronous, with the latter
taking the lead in recent years.

Asynchronous DRAM models


FPM (Fast Page Mode)

An improvement on its Page Mode Access predecessor, FPM quickly became the
most widely utilized access mode for DRAM, almost universally installed for a
time, and still widely supported. Its bypassing of power-hungry sense and
restore current is a major benefit. With speeds originally limited to around 120
ns, and later improvements to as low as 60 ns, FPM was still unable to keep up
with the soon-to-be ubiquitous 66 MHz system bus. Presently, FPM is rarely
used, having been supplanted by several superior technologies. Ironically, its
rarity of deployment has resulted in it often being more costly than other forms.

EDO (Extended Data Output)

EDO, or HyperPage mode, as it is sometimes called, was the last significant
improvement to be made available to DRAM customers before the
development of synchronous DRAM interface technology.

Its advantage over FPM lies in its ability, by not turning off the output buffers,
to allow an access operation to commence prior to the completion of the
previous operation. This provides a performance improvement over FPM DRAM,
with no increase in silicon usage, and hence, package size.

Improvements in the 30-40% range are seen in the implementation of EDO
DRAM. Additionally, it supports memory bus speeds up to 83 MHz, sacrificing
little or no performance capability. Given proper chip support, even 100 MHz
bus speeds are accessible, although with much lower results than the newer
synchronous forms.

Users whose bus speed requirements are no higher than 83 MHz should see no
clear advantage in upgrading from EDO to synchronous embodiments.

Nonetheless, the rise in widely available higher chipset capabilities has, for the
most part, left EDO in the past alongside FPM DRAM.

BEDO (Burst Extended Data Output)

BEDO was a largely unsuccessful attempt to further improve on EDO
performance. It utilized a burst mode combined with dual bank architecture
to beat EDO capabilities. Simply put, the BEDO advantage lay in its ability to
prepare 3 subsequent addresses internally following an initial address input.
This technique allowed it to overcome time delays resulting from the input of
each new address.

The timing of its development was, however, poor: by the time it became a
viable alternative, most large manufacturers had devoted most of their
development energies to SDRAM and related advancements. Consequently,
industry consensus, led by Intel, rejected BEDO as an acceptable solution,
irrespective of its possible merits.

SDRAM (Synchronous DRAM) Models

As parallel advances made it clear that the future belonged to bus speeds in
excess of 66 MHz, it became incumbent upon developers to overcome latency
problems inherent in existing DRAM forms. The development of workable
solutions in the area of synchronous operations was the result, accomplishing
not only the ability to embrace higher bus speeds, but other advantages as well.

About SDRAM Speed Ratings

It is worthwhile to be aware of the conventions used for rating the relative
speed of SDRAM modules.

While ratings for asynchronous DRAM are listed in nanoseconds, an additional
MHz rating is applied to SDRAM, since its synchronous nature requires that it
be compatible with system bus speed.

Unfortunately, exactly matching SDRAM ratings with bus speeds (saying, for
example, that a 100 MHz/10 ns-rated SDRAM will operate properly at a 100
MHz system speed) is not practically correct. This stems from the fact that the
SDRAM MHz rating refers to optimum conditions in the system, a scenario that
rarely, if ever, occurs in real-world operations. More workable is to
"over-match" the SDRAM rating to compensate. Thus, a much more realistic
qualification is undertaken by manufacturers, in which 100 MHz SDRAM is
intended for operation with a system speed of, say, 83 MHz.

To make the entire process easier, Intel implemented a Speed Rating system for
SDRAM qualification, its now universally exercised “PC” rating. The
qualification, intended as an aid to both manufacturers and users, takes into
account the above noted considerations in addition to other internal timing
characteristics. As much as any across-the-board rating system, this scale comes
very close to assuring the compatibility of SDRAM with its host system speed.
According to the Intel system, a PC100-compliant SDRAM module will perform
well with a 100 MHz system, and likewise for its PC66 and PC133 ratings. The
ratings also guarantee compatibility with Intel platforms, such as PC100 with
the Intel BX motherboard chipset.

DRDRAM (Direct Rambus)

Rambus is the name for a complete interface design developed (by, not
surprisingly, Rambus Inc.) with the full endorsement of Intel as a replacement
for conventional SDRAM.

As opposed to making improvements upon the existing SDRAM platform,
Rambus technology actually rebuilds the system as we know it, and is available
only on a special RIMM (Rambus Inline Memory Module), incompatible with
non-Rambus DIMMs (Dual Inline Memory Modules). It utilizes a memory bus
narrowed to 16 bits, running at effective speeds as high as 800 MHz. The
reduced bus profile makes room for multiple channels run in parallel, hence
the higher speeds.

As in most cases of vast performance gains, there are numerous qualifications
to the system. Physical architecture issues are one point, since the high speeds
make the system especially prone to EMI, and a significant increase in heat
generation is another. These particular liabilities have been addressed through
reduced wire lengths, enhanced shielding, and special heatsinks, respectively.
Other concerns have yet to be addressed, not least of which is an even higher
than usual latency problem. Costs are also considerably higher.

Finally, and perhaps most significant, is the fact that, unlike any other currently
known solutions, Rambus technology is wholly proprietary. The requirement
for manufacturers implementing the technology to pay licensing fees to Intel
and Rambus is a serious strike against the technology, as is the fact that the
fees are passed along to the user in increased unit costs. Despite Intel's (to say
the least) wholehearted support of the solution, analysts remain uncommitted
to its potential for wide acceptance.

SLDRAM (Synchronous Link)

Originally a very strong contender against DRDRAM, Synchronous Link had the
distinct advantage, first and foremost, of being an open industry standard,
obviating the various difficulties inherent with Rambus’ proprietary nature.

Additionally, SLDRAM required no redesign of RAM chips. With a 200 MHz
clock speed for its 64-bit bus combining with the same double transfer per
clock cycle as DDR SDRAM, it achieved an effective speed of 400 MHz. The
cumulative theoretical bandwidth of around 3.2 GB per second was about
double that of Rambus.
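The bandwidth figures quoted here follow directly from bus width and transfer rate. A quick sketch (function and variable names are illustrative, not any real API) checks both the SLDRAM and Rambus numbers:

```python
def peak_bandwidth_gb_s(bus_width_bits, transfers_per_sec_m):
    """Peak bandwidth in GB/s: bus width in bytes, times millions of
    transfers per second, divided by 1000."""
    return (bus_width_bits // 8) * transfers_per_sec_m / 1000

# SLDRAM: 64-bit bus, 200 MHz clock, two transfers per clock cycle
sldram = peak_bandwidth_gb_s(64, 200 * 2)
# Direct Rambus: 16-bit channel at an effective 800 MHz
rambus = peak_bandwidth_gb_s(16, 800)

print(sldram, rambus)  # 3.2 1.6
```

The 3.2 GB/s versus 1.6 GB/s result matches the "about double" comparison above, per channel.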

Unfortunately, major players' lack of widespread support for SLDRAM, in favor
of the Rambus technology, resulted in this model effectively disappearing from
the field. This left no strong contender in the synchronous DRAM category
except DDR SDRAM, as follows.
DDR SDRAM (Double Data Rate Synchronous)

Presently accepted as the de facto industry standard for system memory, DDR
SDRAM provides the best available balance (at this writing) between
performance capability and industry-wide chipset support.

DDR SDRAM carries two basic advantages in operating technology over regular
SDRAM.

First, as its name indicates, DDR SDRAM uses improved signaling technology
(using Stub Series Terminated Logic, or SSTL, rather than SDRAM’s TTL/LVTTL)
to activate output on both edges of the system clock, rising and falling. This
effectively doubles the operating bandwidth with the same clock frequency.

The doubling of the data rate is somewhat deceptive, since the presence of the
system cache can mask its actual performance. While the DDR architecture does
drive the data bus at the higher speeds, the command bus may be unable to
keep up. Additionally, deployment with a (relatively) slower bus speed such as
66 MHz will limit the observable advantage. Hence, it is only with more
powerful system bus speeds, 100 or 133 MHz (quickly growing in popularity
and use), that the true performance difference of DDR SDRAM can be
appreciated.

The second key recommendation for DDR SDRAM memory is its use of a Delay-
Locked Loop (DLL), which provides a data strobe signal to validate the data on
the pins. The strobe is used once for every 16 output operations, providing a
more precise location of data as well as sharper synchronization of incoming
signals. This increased acuity also allows the synchronized operation of unlike
memory modules.

The following section details more of DDR SDRAM’s features and benefits.

ABOUT DDR SDRAM

This section provides a more detailed view of DDR SDRAM (and in some cases,
simply DRAM) and its advantages to users, especially those in data-intensive
environments.

Some topological differences between DDR SDRAM and SDRAM DIMM
modules are:

Profile        SDRAM    DDR
Voltage (V)    3.3      2.5
Pin count      168      184
Notches        2        1

Consequently, DDR is not backward compatible with non-DDR SDRAM-based
systems, entailing a complete replacement of modules.

Speed Rating

JEDEC, the semiconductor engineering standardization body of the Electronic
Industries Alliance (EIA), recommends the following rating system for SDRAM
and DDR speeds.

The peak data transfer rate is an effective 200 MHz, based on a 100 MHz bus
speed multiplied by DDR's doubling capability. As mentioned previously,
however, the command bus is limited to single transactions per clock tick.

Chips are designated by their effective data rates, so DDR SDRAM chips with an
effective rate of 200 MHz are designated DDR200, and those at 266 MHz as
DDR266.

DDR DIMMs use peak bandwidth (the most data deliverable per second) as a
designator. The formula can be seen in the following example using a 266 MHz
DDR DIMM:

(module width) 8 bytes X (rated speed) 266 MHz = 2128 MB/sec, or approx.
2.1 GB/sec

Thus, the module would usually be rated as PC2100. Accordingly, a PC1600
DIMM would use chips rated at 200 MHz.
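The naming arithmetic above can be sketched in a few lines (the function name is invented for illustration):

```python
def pc_rating(effective_mhz, bus_width_bytes=8):
    """Derive a DDR DIMM's 'PC' designator from its peak bandwidth.

    Peak bandwidth in MB/s = module width in bytes x effective MHz,
    rounded down to the conventional marketing figure.
    """
    mb_per_sec = bus_width_bytes * effective_mhz   # 8 x 266 = 2128
    return "PC%d" % (mb_per_sec // 100 * 100)      # 2128 -> PC2100

print(pc_rating(266))  # PC2100
print(pc_rating(200))  # PC1600
```

Note how the DIMM designator (PC2100) is based on bandwidth, while the chip designator (DDR266) is based on effective data rate.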

About Unbuffered and Registered modules

Generally, in the present marketplace, DIMM memory is available in two major
types, Unbuffered and Registered.

Unbuffered DIMMs

Unbuffered modules allow unchecked throughput, and are designed for lower-
traffic personal PC and entry-level server/workstation environments. A low
likelihood of massive traffic on these systems means that the Unbuffered, more
economically priced, module is sufficient.
Registered DIMMs

Registered modules carry a higher price tag than the Unbuffered alternative, as
they provide a significantly enhanced level of protection against signal
overload.

All modules installed must be Registered; the system will not allow mixing of
Registered and Unbuffered modules.

The designation of this DIMM type comes from its application of registers to
the address and control signals. These registers, at the cost of a one-cycle delay
on all data delivered to the module, re-drive the signals across the complete
RAM array, so there is no deterioration of the signal over the complete path.
The result is a much cleaner signal being delivered to the system. This buffering
reduces the electrical load each module presents to the bus, supporting 32 or
more chips per module. Thus, the threat of overloaded signal paths is avoided.
This assurance makes Registered DDR SDRAM the memory model of choice for
any data-critical application.

Data Integrity

Original RAM modules (especially those of lower quality) presented users with
a serious potential for error, since individual bits could return erroneous
information only occasionally, making identification of the source extremely
difficult. The most basic solution to this was to add a parity-based checking
system to RAM modules. This measure added an extra bit to each of the 8-bit
sections used to store a byte of data, with the extra bit used to record parity.
The resulting 9-bit segments function normally, but the built-in parity bit,
when supported, permits parity checking for errors.

Some drawbacks inherent in parity solutions for RAM include the inability to
actually correct errors, the failure to detect even numbers of bit errors (since
paired errors cancel each other out), the tendency to generate a shutdown
when encountering errors, and extra cost.
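The even-bit blind spot is easy to demonstrate. The snippet below (an illustration only, not any real memory-controller interface) stores one even-parity bit per byte and shows a two-bit error passing unnoticed:

```python
def parity_bit(byte):
    """Even parity: chosen so the total count of 1 bits, byte plus
    parity bit, is even."""
    return bin(byte).count("1") % 2

def parity_check(byte, stored_parity):
    """True if the byte still matches its stored parity bit."""
    return parity_bit(byte) == stored_parity

data = 0b10110010
p = parity_bit(data)           # the ninth bit, stored with the byte

one_flip = data ^ 0b00000001   # single-bit error
two_flip = data ^ 0b00000011   # two-bit (even) error

print(parity_check(one_flip, p))  # False: detected
print(parity_check(two_flip, p))  # True: the two flips cancel out
```

The second check passing silently is exactly the even-bit failure mode described above; the corrupted byte is accepted as good.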

Parity RAM is rarely seen in the present marketplace, since advances in RAM
manufacture have resulted in a generally higher level of quality. Thus, PC users
will, in most cases, get adequate error protection with newer, higher quality
non-parity RAM modules.

Server-based systems running mission-critical applications, however, require
the presence of RAM-based data integrity support.

Presently, ECC-supported RAM provides this support. One significant
improvement over parity is ECC's ability to detect multiple-bit errors and
repair single-bit errors. In normal applications, to conserve time and operating
costs, most environments implement single-bit repair and double-bit reporting.

With ECC DRAM, readily identifiable by the presence of 9 chips rather than 8,
the check and correct functions take place not on the module, but rather on
the system board. The memory module simply provides the necessary space for
the storage of the information supplied by the process.
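The single-bit repair behavior can be illustrated with a toy Hamming code over one byte. This is a sketch only: real ECC DIMMs use a wider 72-bit/64-bit code computed by the memory controller, and all names below are invented for illustration.

```python
DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-two slots
PARITY_POSITIONS = [1, 2, 4, 8]                # power-of-two slots

def encode(byte):
    """Hamming(12,8) code word as a 1-indexed bit list (index 0 unused)."""
    word = [0] * 13
    for i, pos in enumerate(DATA_POSITIONS):
        word[pos] = (byte >> i) & 1
    for p in PARITY_POSITIONS:
        # each parity bit covers every position whose index has bit p set
        word[p] = sum(word[i] for i in range(1, 13) if i & p) % 2
    return word

def correct(word):
    """Return (repaired word, syndrome). Syndrome 0 means no error;
    otherwise it names the flipped position, which gets repaired."""
    syndrome = 0
    for i in range(1, 13):
        if word[i]:
            syndrome ^= i
    if syndrome:
        word = word[:]
        word[syndrome] ^= 1            # single-bit repair
    return word, syndrome

good = encode(0b10110010)
bad = good[:]
bad[6] ^= 1                            # one bit flipped in storage
fixed, pos = correct(bad)
print(pos, fixed == good)  # 6 True
```

The syndrome both detects the error and names the bit to repair, which is the essential mechanism behind the board-level check-and-correct process described above.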

ECC-equipped DDR SDRAM has become a nearly universal solution for server-
based systems.

CONCLUSION

In conclusion, server-based systems greatly benefit from the implementation of
Registered DDR SDRAM with ECC support. This choice addresses both cost
concerns and optimum preparedness for future developments.

Further confirmation of the suitability of this solution came in Spring 2001 at
the Intel Developer Forum's Memory Session, which validated the migration in
server-based systems from SDRAM to DDR SDRAM.

FOR MORE INFORMATION

For more detailed information regarding DDR memory technology, please visit
the website of the Joint Electron Device Engineering Council (JEDEC) at:
http://www.jedec.org/

