White Paper
This White Paper provides a comprehensive overview of Memory technology, with a focus on
DDR DRAM.
August 2001
© 2001 Acer Incorporated. All rights reserved.
This paper is for informational purposes only. ACER MAKES NO WARRANTIES,
EXPRESS OR IMPLIED, IN THIS DOCUMENT.
Acer and Acer Altos are registered trademarks or trademarks of Acer Incorporated.
Microsoft, Windows 2000 Advanced Server, Windows 2000 Datacenter Server,
and AMI product range are either registered trademarks or trademarks of
Microsoft Corporation.
Other product or company names mentioned herein may be the trademarks of
their respective owners.
CONTENTS

MEMORY TECHNOLOGY BASICS
  ROM
    PROM (Programmable ROM)
    EPROM (Erasable Programmable ROM)
    EEPROM (Electrically Erasable Programmable ROM)
  RAM
    SRAM (Static RAM)
    DRAM (Dynamic RAM)
DRAM TYPES
  Asynchronous DRAM Models
    FPM (Fast Page Mode)
    EDO (Extended Data Output)
    BEDO (Burst Extended Data Output)
  SDRAM (Synchronous DRAM) Models
    About SDRAM Speed Ratings
    DRDRAM (Direct Rambus)
    SLDRAM (Synchronous Link)
    DDR SDRAM (Double Data Rate Synchronous)
CONCLUSION
ROM
As the name implies, ROM can only be read during normal operation; its contents cannot be rewritten as part of ordinary use. Basic ROM stores critical information in computers and other digital devices, information whose integrity is vital to system operation and is unlikely to change. Other commonly used forms of ROM include PROM (Programmable ROM), EPROM (Erasable Programmable ROM), and EEPROM (Electrically Erasable Programmable ROM).
RAM
The "Random" component of RAM's name refers to this type of memory's ability to access any single byte of information without touching or affecting neighboring bytes. RAM plays a major role in system operation, and specifically in performance: essentially, the more complex a program is, the more its execution will benefit from the presence of ample, efficient RAM.
RAM takes two forms, SRAM (Static RAM), and DRAM (Dynamic RAM). A
detailed discussion of both follows.
Async SRAM
The designation Async is short for Asynchronous, meaning that this type of SRAM functions with no dependence on the system clock. Async SRAM is an older type of SRAM, normally limited to L2 cache use.
Sync SRAM
The most common type, Sync SRAM directs larger packets of data (requests) to the memory all at the same time (pipelining), allowing a much quicker reaction on the part of the RAM and thus increasing access speed. This type of SRAM accommodates bus speeds in excess of 66 MHz, hence its popularity.
DRAM (Dynamic RAM)
Dynamic RAM derives its name from its requirement for constant refreshing. While its raw speed makes it a secondary choice to SRAM, its superior cost advantage makes it a vastly more popular choice for system memory.
A more detailed look at the various types of DRAM follows.
FPM (Fast Page Mode)
An improvement on its Page Mode Access predecessor, FPM quickly became the most widely used access mode for DRAM, almost universally installed for a time and still widely supported. Its bypassing of the power-hungry sense and restore current is a major benefit. With speeds originally limited to around 120 ns, and later improved to as low as 60 ns, FPM was still unable to keep up with the soon-to-be ubiquitous 66 MHz system bus. Presently, FPM is rarely used, having been superseded by several superior technologies. Ironically, its rarity of deployment has made it often more costly than other forms.
EDO (Extended Data Output)
EDO's advantage over FPM lies in its ability, by not turning off the output buffers, to allow an access operation to commence before the previous operation has completed. This provides a performance improvement over FPM DRAM with no increase in silicon usage, and hence no increase in package size.
Users whose bus speed requirements are no higher than 83 MHz will see no clear advantage in upgrading from EDO to synchronous alternatives. Nonetheless, the widespread availability of more capable chipsets has, for the most part, left EDO in the past alongside FPM DRAM.
BEDO (Burst Extended Data Output)
The timing of BEDO's development was, however, poor: by the time it became a practical alternative, most large manufacturers had devoted their development energies to SDRAM and related advances. Consequently, industry wisdom, led by Intel, dismissed BEDO as an acceptable solution, irrespective of its possible merits.
About SDRAM Speed Ratings
Unfortunately, it is not practically correct to match SDRAM speed ratings exactly to bus speeds, that is, to assume, for example, that a 100 MHz/10 ns-rated SDRAM will operate properly with a 100 MHz system. This stems from the fact that the SDRAM MHz rating refers to optimum conditions in the system, a scenario that rarely, if ever, occurs in real-world operation. A more workable approach is to "over-match" the SDRAM rating to compensate. Thus, manufacturers undertake a much more realistic qualification, in which 100 MHz SDRAM is intended for operation with a system speed of, say, 83 MHz.
To make the entire process easier, Intel implemented a speed-rating system for SDRAM qualification, the now universally used "PC" rating. The qualification, intended as an aid to both manufacturers and users, takes into account the considerations noted above in addition to other internal timing characteristics. As much as any across-the-board rating system can, this scale comes very close to assuring the compatibility of SDRAM with its host system speed.
According to the Intel system, a PC100-compliant SDRAM module will perform well with a 100 MHz system; likewise for the PC66 and PC133 ratings. The ratings also guarantee compatibility with Intel platforms, such as PC100 with the Intel BX chipset.
DRDRAM (Direct Rambus)
Rambus is the name for a complete interface design developed (by, not surprisingly, Rambus Inc.) with the full endorsement of Intel as a replacement for SDRAM.
Finally, and perhaps most significant, is the fact that, unlike other currently known solutions, Rambus technology is wholly proprietary. The requirement for manufacturers implementing the technology to pay licensing fees to Intel and Rambus is a serious strike against it, as is the fact that those fees are passed along to the user in increased unit costs. Despite Intel's (to say the least) wholehearted support of the solution, analysts remain unconvinced of its potential for wide acceptance.
SLDRAM (Synchronous Link)
Originally a very strong contender against DRDRAM, Synchronous Link had the distinct advantage, first and foremost, of being an open industry standard, obviating the various difficulties inherent in Rambus' proprietary nature.
DDR SDRAM (Double Data Rate Synchronous)
Presently accepted as the de facto industry standard for system memory, DDR SDRAM carries two basic advantages in operating technology over regular SDRAM.
First, as its name indicates, DDR SDRAM uses improved signaling technology
(using Stub Series Terminated Logic, or SSTL, rather than SDRAM’s TTL/LVTTL)
to activate output on both edges of the system clock, rising and falling. This
effectively doubles the operating bandwidth with the same clock frequency.
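The effect of clocking data on both edges can be illustrated with a small back-of-the-envelope calculation. The sketch below is illustrative only; it assumes the standard 64-bit (8-byte) DIMM data bus, and the function name is hypothetical.

```python
# Peak theoretical bandwidth of a memory bus. DDR transfers data on
# both the rising and falling clock edges, so it completes two
# transfers per clock cycle instead of one.

BUS_WIDTH_BYTES = 8  # standard 64-bit DIMM data bus

def peak_bandwidth_mb_s(clock_mhz: int, ddr: bool) -> int:
    """Peak bandwidth in MB/s for a given bus clock frequency."""
    transfers_per_cycle = 2 if ddr else 1
    return BUS_WIDTH_BYTES * clock_mhz * transfers_per_cycle

# Same 100 MHz clock, twice the bandwidth:
print(peak_bandwidth_mb_s(100, ddr=False))  # 800 MB/s (PC100 SDRAM)
print(peak_bandwidth_mb_s(100, ddr=True))   # 1600 MB/s (DDR)
```

Note that this is the peak figure for the data bus alone; as the following paragraph explains, other parts of the system can keep real workloads from reaching it.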
The doubled data rate is somewhat deceptive, since the presence of the system cache can mask its actual performance. While the DDR architecture does drive the data bus at the higher speed, the command bus may be unable to keep up. Additionally, deployment with a (relatively) slower bus speed such as 66 MHz will limit the observable advantage. Hence, it is only with the more powerful system bus speeds of 100 or 133 MHz (quickly growing in popularity and use) that the true performance difference of DDR SDRAM can be appreciated.
The second key advantage of DDR SDRAM is its use of a Delay-Locked Loop (DLL), which provides a data strobe signal that validates the data on the pins. The signal is issued once for every 16 output operations, providing a more precise location of data as well as sharper synchronization of incoming signals. This increased acuity also allows the synchronized use of dissimilar memory modules.
The following section details more of DDR SDRAM’s features and benefits.
Speed Rating
JEDEC, the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), recommends the following rating system for SDRAM and DDR speeds.
The peak data-transfer rate is 200 MHz, based on a 100 MHz bus speed multiplied by DDR's doubling of transfers per clock. As mentioned previously, however, the command bus is still limited to one transaction per clock tick.
Chips are, of course, designated by native clock speeds, so 200MHz DDR SDRAM
chips are designated as DDR200, and 266MHz DDR SDRAM chips as DDR266.
DDR DIMMs use peak bandwidth (the most data deliverable per second) as a
designator. The formula can be seen in the following example using a 266MHz
DDR DIMM:
(module width) 8 bytes X (rated speed) 266 MHz = 2128 MB/sec, or approximately 2.1 GB/sec.
Thus, the module would usually be rated as PC2100.
Accordingly, a PC1600 DIMM would use chips rated at 200 MHz.
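The naming convention above can be sketched as a small helper. The function name is hypothetical, and the rounding to the nearest hundred mirrors the marketing convention that turns 2128 MB/sec into "PC2100":

```python
# "PC" module rating = peak bandwidth in MB/s on a 64-bit (8-byte)
# bus, rounded to the conventional marketing figure.

def pc_rating(data_rate_mhz: int) -> str:
    """Conventional PCxxxx label for a DDR effective data rate."""
    bandwidth_mb_s = 8 * data_rate_mhz       # bytes/transfer x transfers/sec
    return f"PC{round(bandwidth_mb_s, -2)}"  # round to the nearest 100

print(pc_rating(266))  # PC2100  (8 x 266 = 2128)
print(pc_rating(200))  # PC1600  (8 x 200 = 1600)
```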
Unbuffered DIMMs
Unbuffered modules allow unchecked throughput, and are designed for systems with lower memory demands.
Registered DIMMs
Registered modules carry a higher price tag than the Unbuffered alternative, as they provide a significantly enhanced level of data-overload protection. All modules installed must be Registered; the system will not allow mixing of Registered and Unbuffered modules.
The designation of this DIMM type comes from its application of registers to the signals. These registers, by applying a one-cycle delay to all data delivered to the module, boost the clock signal across the complete RAM array, so there is no deterioration of the signal over the complete path. The result is a much cleaner signal delivered to the system. This process amplifies the bus signals, allowing fewer transactions to achieve the same result and supporting 32 or more chips per module. Thus, the threat of overloaded signal paths is avoided.
This assurance makes Registered DDR SDRAM the memory model of choice for any data-critical application.
Data Integrity
Early RAM modules (especially those of lower quality) presented users with a serious potential for error, since individual bits could occasionally return erroneous information, making identification of the source extremely difficult. The most basic solution was to add a parity-based checking system to RAM modules. This measure added an extra bit to each of the 8-bit sections used to store a byte of data, with the extra bit used to record parity. The resulting 9-bit segments function normally, but the added parity bit, when supported, enables parity checking for errors.
Parity has several inherent drawbacks: it cannot actually correct errors, it fails to detect even-bit errors (since the two errors cancel each other out), it tends to halt the system when an error is encountered, and it adds cost.
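The parity mechanism, and its even-bit blind spot, can be sketched in a few lines. This is illustrative code only, not tied to any particular memory controller:

```python
# Even parity: a ninth bit is stored so that the total number of
# 1-bits in the 9-bit group is even. A single flipped bit makes the
# count odd, which the hardware can detect, though not correct.

def parity_bit(byte: int) -> int:
    """Even-parity bit for an 8-bit value."""
    return bin(byte).count("1") % 2

def parity_ok(byte: int, stored_parity: int) -> bool:
    """True if the 9-bit group still has even parity."""
    return parity_bit(byte) == stored_parity

data = 0b10110010            # four 1-bits, so the parity bit is 0
p = parity_bit(data)
print(parity_ok(data, p))          # True: no error
print(parity_ok(data ^ 0b01, p))   # False: single-bit error detected
print(parity_ok(data ^ 0b11, p))   # True: a two-bit error cancels out
```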
Parity RAM is rarely seen in the present marketplace, since advances in RAM
manufacture have resulted in a generally higher level of quality. Thus, PC users
will, in most cases, get adequate error protection with newer, higher quality
non-parity RAM modules.
With ECC DRAM, readily identifiable by the presence of 9 chips rather than 8,
the check and correct functions take place not on the module, but rather on
the system board. The memory module simply provides the necessary space for
the storage of the information supplied by the process.
ECC-equipped DDR SDRAM has become a nearly universal solution for server-
based systems.
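The correction step that distinguishes ECC from simple parity can be illustrated with a small Hamming code over a single byte. This is a sketch only: commercial ECC memory applies the same principle with a wider (72,64) code, and all names here are illustrative.

```python
# Hamming code sketch: four check bits at positions 1, 2, 4, 8 each
# cover the positions whose number has that bit set. On a read, the
# recomputed checks (the "syndrome") are zero if the word is intact,
# and otherwise give the position of a single flipped bit, which can
# then be silently corrected.

CHECK_POS = (1, 2, 4, 8)
DATA_POS = (3, 5, 6, 7, 9, 10, 11, 12)

def encode(data8: int) -> list:
    """Spread 8 data bits into a 12-bit codeword (index 0 unused)."""
    w = [0] * 13
    for i, pos in enumerate(DATA_POS):
        w[pos] = (data8 >> i) & 1
    for p in CHECK_POS:  # each check bit gives its group even parity
        w[p] = sum(w[i] for i in range(1, 13) if i & p and i != p) % 2
    return w

def syndrome(w: list) -> int:
    """0 if the word is intact; otherwise the flipped bit's position."""
    return sum(p for p in CHECK_POS
               if sum(w[i] for i in range(1, 13) if i & p) % 2)

word = encode(0b10110010)
print(syndrome(word))   # 0: stored word is consistent
word[6] ^= 1            # a bit flips in "memory"
print(syndrome(word))   # 6: the syndrome locates the error...
word[6] ^= 1            # ...so the controller can flip it back
print(syndrome(word))   # 0: corrected
```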
This consideration addresses both cost concerns and optimum preparedness for future developments.