
RAM???

BY:
AHMAD KHAIRI HALIS

FOR:
ASSIGNMENT 1 – QUESTION 2

BKE 5413

INTRODUCTION
Similar to a microprocessor, a memory chip is an integrated circuit (IC) made of millions of transistors
and capacitors. In the most common form of computer memory, a transistor and a capacitor are paired
to create a memory cell, which represents a single bit of data. The capacitor holds the bit of information
-- a 0 or a 1. The transistor acts as a switch that lets the control circuitry on the memory chip read the
capacitor or change its state.

A capacitor is like a small bucket that is able to store electrons. To store a 1 in the memory cell, the
bucket is filled with electrons. To store a 0, it is emptied. The problem with the capacitor's bucket is
that it has a leak. In a matter of a few milliseconds a full bucket becomes empty. Therefore, for
dynamic memory to work, either the CPU or the memory controller has to come along and recharge all
of the capacitors holding a 1 before they discharge. To do this, the memory controller reads the
memory and then writes it right back. This refresh operation happens automatically thousands of times
per second.

This refresh operation is where dynamic RAM gets its name. Dynamic RAM has to be dynamically
refreshed all of the time or it forgets what it is holding. The downside of all of this refreshing is that it
takes time and slows down the memory.

Memory cells are etched onto a silicon wafer in an array of columns (bitlines) and rows (wordlines).
The intersection of a bitline and wordline constitutes the address of the memory cell.

DRAM works by sending a charge through the appropriate column (CAS) to activate the transistor at
each bit in the column. When writing, the row lines contain the state the capacitor should take on.
When reading, the sense-amplifier determines the level of charge in the capacitor. If it is more than 50
percent, it reads it as a 1; otherwise it reads it as a 0. The counter tracks the refresh sequence based on
which rows have been accessed in what order. The length of time necessary to do all this is so short
that it is expressed in nanoseconds (billionths of a second). A memory chip rating of 70ns means that it
takes 70 nanoseconds to completely read and recharge each cell.
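The read-threshold and refresh behaviour described above can be sketched in a few lines. This is a toy model, not a circuit simulation: the leak rate, the 50-percent threshold and the function names are all illustrative.

```python
# Toy model of a DRAM cell: a leaky "bucket" of charge read through a
# sense amplifier with a 50% threshold, then refreshed by writing the
# sensed value straight back. All constants here are illustrative.

def sense(charge):
    """Sense amplifier: more than 50% charge reads as 1, otherwise 0."""
    return 1 if charge > 0.5 else 0

def refresh(charge):
    """Read the cell and write the result back, restoring a decayed 1
    to full charge (or a 0 to fully empty)."""
    return 1.0 if sense(charge) == 1 else 0.0

# A stored 1 leaks away if nothing intervenes:
charge = 1.0
for _ in range(3):
    charge *= 0.7          # leakage between refresh opportunities
print(sense(charge))       # 0.343 < 0.5 -> the bit would now read as 0

# With a refresh arriving before the charge crosses the threshold:
charge = 1.0
charge *= 0.7              # one round of leakage
charge = refresh(charge)
print(charge)              # restored to 1.0
```

The key point the model captures is that refresh must arrive while the decayed charge is still on the correct side of the threshold; once it crosses, the bit is silently corrupted.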

Memory cells alone would be worthless without some way to get information in and out of them. So
the memory cells have a whole support infrastructure of other specialized circuits. These circuits
perform functions such as:
- Identifying each row and column (row address select and column address select)
- Keeping track of the refresh sequence (counter)
- Reading and restoring the signal from a cell (sense amplifier)
- Telling a cell whether it should take a charge or not (write enable)
The memory controller also performs tasks such as identifying the type, speed and amount of memory
and checking for errors.

Although memory is technically any form of electronic storage, it is used most often to identify fast,
temporary forms of storage. If your computer's CPU had to constantly access the hard drive to retrieve
every piece of data it needs, it would operate very slowly. When the information is kept in memory, the
CPU can access it much more quickly. Most forms of memory are intended to store data temporarily.
As you can see in the diagram above, the CPU accesses memory according to a distinct hierarchy.
Whether it comes from permanent storage (the hard drive) or input (the keyboard), most data goes into
random access memory (RAM) first. The CPU then stores pieces of data it will need to access, often in
a cache, and maintains certain special instructions in the register. We'll talk about cache and registers
later.

Clock speed    Time per clock tick

20 MHz         50 ns
25 MHz         40 ns
33 MHz         30 ns
50 MHz         20 ns
66 MHz         15 ns
100 MHz        10 ns
133 MHz        7.5 ns
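The table follows directly from period = 1 / frequency. A quick check of the listed values (the helper function is illustrative; the table rounds some entries):

```python
# Clock period in nanoseconds from clock frequency in MHz:
# period_ns = 1000 / f_MHz, since 1 MHz^-1 = 1 microsecond = 1000 ns.

def ns_per_tick(mhz):
    """Clock period in nanoseconds for a clock given in MHz."""
    return 1000.0 / mhz

for mhz in (20, 25, 33, 50, 66, 100, 133):
    print(f"{mhz} MHz -> {ns_per_tick(mhz):.1f} ns")
# 33 MHz and 66 MHz come out as 30.3 ns and 15.2 ns; the table rounds them.
```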

RAM type    Max. peak bandwidth

FPM         176 MB/sec
EDO         264 MB/sec
SDRAM       528 MB/sec
HISTORY

In the 80s, PCs were equipped with RAM in quantities of 64 KB, 256 KB, 512 KB and 1 MB.
In 1985, Intel's 386 processor had a 32-bit address bus, enabling it to access up to 4GB of memory.

Around 1990, advanced operating systems like Windows appeared on the market and started the RAM
race. The PC needed more RAM. That worked fine with the 386 processor, which could address larger
amounts of RAM. The first Windows-operated PCs could address 2 MB of RAM, but 4 MB soon became
the standard. The race continued through the 90s as RAM prices dropped dramatically.
DRAM is far more widely used than SRAM. The Pentium processor, introduced in 1993, increased the data
bus width to 64 bits, enabling it to access 8 bytes of data at a time.

By the late 1990s, most desktop computers were using 168-pin DIMMs, which supported 64-bit data
paths. PC users have benefited from an extremely stable period in the evolution of memory
architecture. Since the poorly organised transition from FPM to EDO there has been a gradual and
orderly transition to Synchronous DRAM technology. However, the future looks considerably less
certain, with several possible parallel scenarios for the next generation of memory devices.

It was not until early 1998 that the benefit of a 100MHz page cycle time was fully exploited, marking
the beginning of SDRAM's popularity. With the approach of 1999 there was a significant level of support
for a couple of transitional memory technologies.

In early 1999, a number of non-Intel chipset makers decided to release chipsets that supported the
faster PC133 SDRAM.

DDR-DRAM first broke into the mainstream PC arena in late 1999, when it emerged as the memory
technology of choice on graphics cards using nVidia's GeForce 256 3D graphics chip. Lack of support
from Intel delayed its acceptance as a main memory technology. Indeed, when it did begin to be used
as PC main memory, it was no thanks to Intel. This was late in 2000 when AMD rounded off what had
been an excellent year for the company by introducing DDR-DRAM to the Socket A motherboard.
While Intel appeared happy for the Pentium III to remain stuck in the world of PC133 SDRAM and
expensive RDRAM, rival chipset maker VIA wasn't, coming to the rescue with the DDR-DRAM
supporting Pro266 chipset.

At the beginning of 2000, NEC began sampling 128MB and 256MB SDRAM memory modules
utilising the company's unique performance-enhancing Virtual Channel Memory (VCM) technology,
first announced in 1997. Fabricated with an advanced 0.18-micron process and optimised circuit layout,
and compliant with the PC133 SDRAM standard, VCM SDRAMs achieve high-speed operation with a
read latency of 2 at 133MHz (7.5ns) and are package- and pin-compatible with standard SDRAMs.
Continuing delays with Rambus memory as well as problems with its associated chipsets finally saw
Intel bow to the inevitable in mid-2000 with the release of its 815/815E chipsets - its first to provide
support for PC133 SDRAM.

By early 2001, DDR-DRAM's prospects had taken a major turn for the better, with Intel at last being
forced to contradict its long-standing and avowed backing for RDRAM by announcing a chipset -
codenamed "Brookdale" - that would be the company's first to support the DDR-DRAM memory
technology. The i845 chipset duly arrived in mid-2001, although it was not before the beginning of
2002 that system builders would be allowed to couple it with DDR SDRAM.

In late 2003, DDR400 chips came out, delivering 3200MBps of bandwidth, and the new DDR2 standard began to appear.

TYPES OF RAM

STATIC RANDOM ACCESS MEMORY


SRAM does not need as much electricity as DRAM because it requires no constant refreshing of its
memory cells, and it runs faster for the same reason: it is not continually rewriting the values stored
inside.

SRAM can give access times as low as 10 ns. In addition, its cycle time is much shorter than that of
DRAM because it does not need to pause between accesses. Unfortunately, it is also much more
expensive to produce than DRAM. Due to its high cost, SRAM is often used only as a memory cache.

Basically, an SRAM is an addressable array of flip-flops. The array can be configured so that the data
comes out in 1-bit, 4-bit, 8-bit, etc. format. SRAM technology is volatile, just like the flip-flop that
forms its basic memory cell. SRAM is often found on microcontroller boards, either on or off the
CPU chip, because for such applications the amount of memory required is small and it would not pay
off to build the extra interface circuitry for DRAMs. Another common application for SRAM is
cache memory, because of SRAM's high speed.

SRAM comes in many speed classes, ranging from a few nanoseconds for fast cache applications to 200 ns for
low-power applications. Note that SRAM exists in both bipolar and MOS technology. About 95% of all
applications use CMOS technology for the highest density and for the lowest power consumption. Fast
cache memory can be constructed of BiCMOS technology, which is a hybrid technology that uses
bipolar transistors for extra drive and speed. The fastest SRAM memories are available in ECL
(emitter coupled logic) bipolar technology. Because of the high power consumption the memory size is
limited in this technology.

A special case of SRAM memory is Content Addressable Memory (CAM). In this technology the
memory consists of an array of flip-flops, in which each row is connected to a data comparator. The
memory is addressed by presenting data to it and not an address. All comparators will then check
simultaneously if their corresponding RAM register holds the same data. The CAM will respond with
the address of the row (register) corresponding to the original data. The main application for this
technology is fast lookup tables. These are often used in network routers.
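The CAM lookup described above can be sketched as follows. The routing-table contents are invented for illustration; in real hardware all row comparators fire in parallel, whereas this model loops over them and merely reproduces the result.

```python
# Sketch of a content-addressable memory: each row compares its stored
# word against the presented data, and the CAM answers with the address
# of the matching row. The example rows mimic a router lookup table.

def cam_lookup(rows, data):
    """Return the address (row index) holding `data`, or None on a miss.
    Hardware performs every comparison simultaneously; this loop just
    models the outcome."""
    for address, stored in enumerate(rows):
        if stored == data:
            return address
    return None

rows = ["10.0.0.0", "192.168.1.0", "172.16.0.0"]
print(cam_lookup(rows, "192.168.1.0"))  # -> 1 (address of the match)
print(cam_lookup(rows, "8.8.8.0"))      # -> None (no row matches)
```

Note the inversion relative to ordinary RAM: data goes in, an address comes out.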

DYNAMIC RANDOM ACCESS MEMORY

The memory addresses in DRAM need to be refreshed many times each second. This causes a greater
amount of electricity to be used by your RAM and also slows it down, because the RAM addresses are
constantly being refreshed.

The word 'dynamic' indicates that the data is not held in a flip-flop but rather in a storage cell. The ugly
thing about a storage cell is that it leaks. Because of that, data must be read out and re-written before
it is lost; this refresh interval is usually 4 to 64 ms. The good thing about a storage cell is that it
requires only one capacitor and one transistor, whereas a flip-flop connected in an array requires six
transistors. In trench-capacitor memory technology, which is used in all modern DRAMs, the cell access
transistor is constructed above the capacitor, so the space on chip is minimized. For this reason DRAM
technology has a lower cost per bit than SRAM technology. The disadvantage of the extra circuitry
required for refreshing is easily offset by the lower price per bit when using large memory sizes.
Therefore the working memory of almost all computing equipment consists of DRAM.

DRAM memory is, just like SRAM memory, constructed as an array of memory cells. A major difference
lies in the addressing technique. In a DRAM, reading a row of data without rewriting it destroys all the
data in that row, because of its dynamic nature.

With a DRAM, the memory array is explicitly divided into rows and columns. From outside the
memory chip, a row is chosen first. Internally this row is selected as a whole and brought to the sense
amplifiers on chip; each column in a DRAM has its own sense amplifier. After the row has been
selected, the column address is presented. The column address can represent a single bit or a group of
bits. In the case of a read, the data of that particular column (or group) is presented to the output of the
chip, after which all columns are automatically rewritten to the addressed row. This is required because
reading the row data actually destroys the original data. In the case of a write, the selected column
(or group of) bits are changed while the other column bits remain unchanged; the whole row is then
written back, with only the selected bits changed. Because in the addressing scheme of a DRAM the
row must be addressed before the column, most manufacturers have taken the opportunity to reduce the
number of pins on the chip by multiplexing the address lines. Since the row and column addresses are
presented at different times, they can easily be placed on the same multiplexed address bus. The drawback
is more complex address-driving circuitry, but for large memories this is a relatively small drawback. The
major problem with the sequential addressing is the lower speed. A second factor that makes DRAM
slower than SRAM is that, when reading the array, the row cells must be sensed before data can be
presented to the output: the memory cell in a DRAM is actually a capacitor that returns an analog
voltage, whereas in an SRAM the memory cell is a flip-flop which can actively drive the bit line.
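The open-row / destructive-read / write-back sequence can be modelled in a few lines. The class below is a toy abstraction (array size, method names and the use of `None` to mark a destroyed row are all invented for illustration), not a timing-accurate model.

```python
# Minimal model of a DRAM access: the whole row is sensed (destroying
# its contents), one column is read or modified, and the entire row is
# then written back automatically.

class ToyDRAM:
    def __init__(self, rows, cols):
        self.array = [[0] * cols for _ in range(rows)]

    def read(self, row, col):
        latched = self.array[row]                 # sense the whole row...
        self.array[row] = [None] * len(latched)   # ...which destroys it
        value = latched[col]                      # column select
        self.array[row] = latched                 # automatic write-back
        return value

    def write(self, row, col, bit):
        latched = self.array[row]                 # row is sensed first
        self.array[row] = [None] * len(latched)
        latched[col] = bit                        # change only the selected bit
        self.array[row] = latched                 # whole row written back

dram = ToyDRAM(4, 8)
dram.write(2, 5, 1)
print(dram.read(2, 5))  # -> 1; the rest of row 2 is unchanged
```

The model makes the cost structure visible: every access, read or write, involves the full row, which is exactly why staying within one row (page) is cheap and why page-mode optimisations pay off.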

To speed up dram access time, several techniques have been incorporated. Most of those technologies
rely on the fact that data is most likely to be read or written in the same row as the previous data.

Over the years, several different structures have been used to create the memory cells on a chip, and in
today's technologies the support circuitry generally includes:
- sense amplifiers to amplify the signal or charge detected on a memory cell
- address logic to select rows and columns
- Row Address Select (/RAS) and Column Address Select (/CAS) logic to latch and resolve the row and
column addresses and to initiate and terminate read and write operations
- read and write circuitry to store information in the memory's cells or read that which is stored there
- internal counters or registers to keep track of the refresh sequence, or to initiate refresh cycles as
needed
- Output Enable logic to prevent data from appearing at the outputs unless specifically desired.

A transistor is effectively a switch which can control the flow of current - either on, or off. In DRAM,
each transistor holds a single bit: if the transistor is "open", and the current can flow, that's a 1; if it's
closed, it's a 0. A capacitor is used to hold the charge, but it soon escapes, losing the data. To overcome
this problem, other circuitry refreshes the memory, reading the value before it disappears completely,
and writing back a pristine version. This refreshing action is why the memory is called dynamic. The
refresh speed is expressed in nanoseconds (ns) and it is this figure that represents the "speed" of the
RAM. Most Pentium-based PCs use 60 or 70ns RAM.

The process of refreshing actually interrupts/slows down the accessing of the data, but clever cache
design minimises this. However, as processor speeds passed the 200MHz mark, no amount of caching
could compensate for the inherent slowness of DRAM, and other, faster memory technologies have
largely superseded it.

FAST PAGE MODE DRAM

FPM DRAM waits through the entire process of locating a bit of data by column and row and then
reading the bit before it starts on the next bit. Maximum transfer rate to L2 cache is approximately 176
MBps.

All types of memory are addressed as an array of rows and columns, and individual bits are stored in
each cell of the array. With standard DRAM or FPM DRAM, which comes with access times of 70ns
or 60ns, the memory management unit reads data by first activating the appropriate row of the array,
activating the correct column, validating the data and transferring the data back to the system. The
column is then deactivated, which introduces an unwanted wait state where the processor has to wait
for the memory to finish the transfer. The output data buffer is then turned off, ready for the next
memory access.

At best, with this scheme FPM can achieve a burst timing as fast as 5-3-3-3 (or 6-3-3-3). This
means that reading the first element of data takes five clock cycles, containing four wait states, with the
next three elements each taking three.
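The x-y-y-y notation turns directly into clock counts and time. A quick calculation for the 5-3-3-3 case on a 66MHz bus (the helper function is illustrative; the timing figures are the ones quoted above):

```python
# Turn a burst timing such as 5-3-3-3 into total clocks and total time.
# Each number is the clocks needed for one of the four data elements.

def burst(timing, bus_mhz):
    """Return (total clocks, total nanoseconds) for one burst."""
    clocks = sum(timing)
    ns = clocks * 1000.0 / bus_mhz   # clocks x period (1000/MHz ns)
    return clocks, ns

clocks, ns = burst((5, 3, 3, 3), 66)
print(clocks)        # 14 clocks to read four data elements
print(round(ns, 1))  # about 212.1 ns at 66MHz
```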

This is accomplished by holding the /RAS line so the DRAM latches the row address while the peripheral
circuitry presents only new /CAS addresses. This technique gives an effective speed boost of nearly
twofold in most computer systems, because most programs are executed sequentially and most data is
read out sequentially (e.g. strings). When data is read from a DRAM, there is a good chance the next
data required by the system is at an address close to the previous one, which is very likely to be in
the same row.

Note: the word 'page' denotes a row in the DRAM. Fast page mode means that data can be accessed quickly
within one page, or row. FPM memory can be recognized by the IC number ending in '00' with most
manufacturers.

EXTENDED DATA OUTPUT RAM

Unlike regular DRAM, which allows access to only one byte (one instruction or one value) of
information at a time, EDO allows an entire block of memory to be moved into the internal cache for
quicker access by the CPU. Theoretically, EDO DRAM can only be used at bus speeds of up to 66
MHz, but many have shown that EDO can be used at bus speeds of up to 83.3 MHz. As a side note,
EDO DRAM is only effective if the PC supports pipeline bursting.

EDO DRAM does not wait for all of the processing of the first bit before continuing to the next one.
As soon as the address of the first bit is located, EDO DRAM begins looking for the next bit. It is
about 5% faster than FPM. Maximum transfer rate to L2 cache is approximately 264 MBps.

EDO memory comes in 70ns, 60ns and 50ns speeds. 60ns is the slowest that should be used in a
66MHz bus speed system (i.e. Pentium 100MHz and above) and the Triton HX and VX chipsets can
also take advantage of the 50ns version. EDO DRAM doesn't demand that the column be deactivated
and the output buffer turned off before the next data transfer starts. It therefore achieves a typical burst
timing of 5-2-2-2 at a bus speed of 66MHz and can complete some memory reads a theoretical 27%
faster than FPM DRAM.
BURST EDO DRAM

BEDO DRAM is a special type of EDO DRAM that transfers information from the RAM memory
addresses to the processor at every clock cycle. But, as the name suggests, the RAM is not able to
sustain this kind of transfer rate for long, forcing it to work in bursts. These bursts are short, since the
RAM simply can't go as fast as the CPU. BEDO DRAM also fails to keep up with bus speeds of over
66 MHz, the theoretical barrier of EDO DRAM anyway. Since RAM generally doesn't keep up with the
speed of the chip, the chip has to slow down; these periods are called wait states. The bursts from
BEDO DRAM help produce fewer of these periods.

BEDO DRAM contains a pipeline stage and a 2-bit burst counter. With the conventional DRAMs such
as FPM and EDO, the initiator accesses DRAM through a memory controller. The controller must wait
for the data to become ready before sending it to the initiator. BEDO eliminates the wait states thus
improving system performance by up to 100% over FPM DRAM and up to 50% over standard EDO
DRAM, achieving system timings of 4-1-1-1 to 5-1-1-1 when used with a supporting chipset.
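Summing the burst timings quoted in this document (FPM 5-3-3-3, EDO 5-2-2-2, BEDO 5-1-1-1) shows where each generation's gain comes from:

```python
# Clocks per 4-element burst for each technology, using the timing
# figures quoted in the text above.

timings = {
    "FPM":  (5, 3, 3, 3),
    "EDO":  (5, 2, 2, 2),
    "BEDO": (5, 1, 1, 1),
}
for name, t in timings.items():
    print(name, sum(t), "clocks per 4-element burst")
# FPM 14, EDO 11, BEDO 8: the first access costs the same 5 clocks in
# every case; the improvements all come from the follow-on accesses.
```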

Despite the fact that BEDO arguably provides more improvement over EDO than EDO does over
FPM, the standard lacked chipset support and consequently never really caught on, losing out to
Synchronous DRAM (SDRAM).
SYNCHRONOUS DRAM

SDRAM takes advantage of the burst mode concept to greatly improve performance. It does this by
staying on the row containing the requested bit and moving rapidly through the columns, reading each
bit as it goes. The idea is that most of the time the data needed by the CPU will be in sequence.
SDRAM is about 5% faster than EDO RAM and is the most common form in desktops today.
Maximum transfer rate to L2 cache is approximately 528 MBps.

All the previous types are asynchronous memories. In an asynchronous DRAM, the addressing and data
read/write is sequenced by external circuitry (the chipset), which needs to wait between each sequence
to allow the DRAM to respond to its signals and output/accept the data.

In an SDRAM, a clock signal is presented to the DRAM by the control circuitry (the chipset). This allows
the DRAM to synchronise its operation to that of the control circuits, so it knows exactly when each
signal is coming and can respond with very precise timing. This approach allows operation at much
higher speeds.

Internally, an SDRAM is still an FPM RAM with features added. When an SDRAM is accessed for the
first time, the access time is still slow, as with an FPM RAM. But for subsequent accesses in the same page,
data can be presented at the output in a fast synchronous fashion (burst mode) at data rates as high as
133MHz, while the maximum data rate for asynchronous DRAM is limited to approximately 30MHz. A second
advantage of SDRAM technology is that, because of its synchronous nature, address data can be
presented ahead of time, greatly streamlining data flow. Also, most SDRAMs are constructed with 2 or 4 banks
on chip, an approach that was not possible in asynchronous DRAMs. The advantage of
having multiple banks is that while one bank is being accessed, another can be read or written, again
increasing data throughput.

SDRAMs exist in 66, 100 and 133MHz versions. Currently the 133MHz modules are popular due to
motherboard support. The chips are basically backward compatible in terms of clock frequency.

An important feature of SDRAM is the CAS latency (CL), which can be 2 or 3. This
figure indicates the number of clocks required to make the first access from a random address. If CL=3
at 100MHz, the access time is 30ns. Note that a 100MHz memory chip at CL=2 (20ns)
can be faster than a 133MHz version at CL=3 (22.5ns). The supply voltage is usually 3.3V.
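The CL comparison is simple arithmetic: first-access time is the CAS latency multiplied by the clock period. A quick check (the helper is illustrative; the text's 22.5ns figure uses the nominal 7.5ns period of 133MHz, while dividing by exactly 133 gives roughly 22.6ns):

```python
# First-access time for SDRAM: CL clocks at the bus clock period.

def first_access_ns(cl, bus_mhz):
    """Nanoseconds for the first access: CL x (1000 / bus MHz)."""
    return cl * 1000.0 / bus_mhz

print(first_access_ns(3, 100))            # 30.0 ns
print(first_access_ns(2, 100))            # 20.0 ns
print(round(first_access_ns(3, 133), 1))  # ~22.6 ns, so CL2@100MHz wins
```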

PC100 SDRAM on a 100MHz (or faster with 4-1-1-1 timing) system bus will provide a performance
boost for Socket 7 systems of between 10% and 15%, since the L2 cache is running at system bus
speed. Pentium II systems will not see as big a boost, because the L2 cache is running at ½ processor
speed anyway, with the exception of the cacheless Celeron chips of course.
Refer to the DESIGNATION section for more information about the meaning of PC66, PC100 and
PC133.

ENHANCED SDRAM

In order to overcome some of the inherent latency problems with standard DRAM memory modules,
several manufacturers have included a small amount of SRAM directly into the chip, effectively
creating an on-chip cache. One such design that is gaining some attention is ESDRAM from Ramtron
International Corporation. ESDRAM is essentially SDRAM, plus a small amount of SRAM cache
which allows for lower latency times and burst operations up to 200MHz. Just as with external cache
memory, the goal of a cache DRAM is to hold the most frequently used data in the SRAM cache to
minimize accesses to the slower DRAM. One advantage to the on-chip SRAM is that a wider bus can be
used between the SRAM and DRAM, effectively increasing the bandwidth and increasing the speed of
the DRAM even when there is a cache miss.

As with DDR SDRAM, there is currently at least one Socket 7 chipset with support for ESDRAM. The
deciding factor in determining which of these solutions will succeed will likely be the initial cost of the
modules. Current estimates show the cost of ESDRAM at about 4 times that of existing DRAM
solutions, which will likely not go over well with most users.

DOUBLE DATA RATE RAM

DDR is the other competing memory technology battling to provide system builders with a high-
performance alternative to RDRAM. As in standard SDRAM, DDR SDRAM is tied to the system's
FSB, the memory and bus executing instructions at the same time rather than one of them having to
wait for the other.

Traditionally, to synchronise logic devices, data transfers would occur on a clock edge. As a clock
pulse oscillates between 1 and 0, data would be output on either the rising edge (as the pulse changes
from a "0" to a "1") or on the falling edge. DDR DRAM works by allowing the activation of output
operations on the chip to occur on both the rising and falling edge of the clock, thereby providing an
effective doubling of the clock frequency without increasing the actual frequency.
DDR memory chips are commonly referred to by their data transfer rate. This value is calculated by
doubling the bus speed to reflect the double data rate. For example, a DDR266 chip sends and receives
data twice per clock cycle on a 133MHz memory bus. This results in a data transfer rate of 266MT/s
(million transfers per second). Typically, 200MT/s (100MHz bus) DDR memory chips are called
DDR200, 266MT/s (133MHz bus) chips are called DDR266, 333MT/s (166MHz bus) chips are called
DDR333, and 400MT/s (200MHz bus) chips are called DDR400.
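The naming rule in code: transfer rate equals bus clock times two. (The helper is illustrative; note that DDR266 and DDR333 round the exact 133.33MHz and 166.67MHz clocks, so feeding in the rounded integer clocks gives 266 and 332.)

```python
# DDR chip naming: data transfers happen on both clock edges, so the
# transfer rate in MT/s is twice the bus clock in MHz.

def ddr_rate(bus_mhz):
    """Transfers per second (MT/s) for a DDR chip on a bus_mhz bus."""
    return bus_mhz * 2

for bus in (100, 133, 200):
    print(f"{bus}MHz bus -> DDR{ddr_rate(bus)}")
# 100MHz -> DDR200, 133MHz -> DDR266, 200MHz -> DDR400
```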

A PC1600 memory module (simply the DDR version of PC100 SDRAM) uses DDR200 chips and can
deliver bandwidth of 1600MBps. PC2100 (the DDR version of PC133 SDRAM) uses DDR266 memory
chips, resulting in 2100MBps of bandwidth. PC2700 modules use DDR333 chips to deliver 2700MBps
of bandwidth, and PC3200 uses DDR400 chips to deliver 3200MBps of bandwidth. Maximum transfer
rate to L2 cache is approximately 1,064 MBps (for 133MHz DDR SDRAM). The supply voltage of
DDR is usually 2.5V.

DDR memory modules, on the other hand, are named after their peak bandwidth - the maximum
amount of data they can deliver per second - rather than their clock rates. This is calculated by
multiplying the amount of data a module can send at once (called the data path) by the speed the FSB is
able to send it. The data path is measured in bits, and the FSB in MHz.
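That naming rule is also easy to put into code: peak bandwidth is the data path in bytes (bits / 8) multiplied by the transfer rate. The helper is illustrative; note that the marketing names round the results for the 133MHz and 166MHz buses (266MT/s x 8 bytes is actually 2128MBps, sold as PC2100).

```python
# DDR module naming: peak bandwidth in MBps = MT/s x (data path bits / 8).

def peak_bandwidth_mbps(transfers_mt_s, data_path_bits=64):
    """Peak bandwidth in MBps for a module with the given data path."""
    return transfers_mt_s * data_path_bits // 8

print(peak_bandwidth_mbps(200))  # 1600 -> PC1600
print(peak_bandwidth_mbps(400))  # 3200 -> PC3200
print(peak_bandwidth_mbps(266))  # 2128, rounded down to PC2100 in the name
```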

Refer to the DESIGNATION section for more information about the meaning of PC1600, PC2100, and
PC2700.

DUAL CHANNEL DDR

The terminology "dual-channel DDR" is, in fact, a misnomer. The fact is there's no such thing as
dual-channel DDR memory. What there are, however, are dual-channel platforms.

When properly used, the term "dual channel" refers to a DDR motherboard chipset that is designed
with two memory channels instead of one. The two channels handle memory processing more
efficiently by utilising the theoretical bandwidth of the two modules, thus reducing system latencies,
the timing delays that inherently occur with one memory module. For example, one controller reads
and writes data while the second controller prepares for the next access, eliminating the reset
and setup delays that occur before one memory module can begin the read/write process all over again.

Consider an analogy in which data is filled into a funnel (memory), which then "channels" the data to
the CPU.
Single-channel memory would feed the data to the processor via a single funnel at a maximum rate of
64 bits at a time. Dual-channel memory, on the other hand, utilises two funnels, thereby having the
capability to deliver data twice as fast, at up to 128 bits at a time. The process works the same way
when data is "emptied" from the processor by reversing the flow of data. A "memory controller" chip is
responsible for handling all data transfers involving the memory modules and the processor. This
controls the flow of data through the funnels, preventing them from being over-filled with data.

It is estimated that a dual-channel memory architecture is capable of increasing bandwidth by as much
as 10%.

The majority of systems supporting dual-channel memory can be configured in either single-channel or
dual-channel memory mode. The fact that a motherboard supports dual-channel DDR memory, does
not guarantee that installed DIMMs will be utilised in dual-channel mode. It is not sufficient to just
plug multiple memory modules into their sockets to get dual-channel memory operation – users need to
follow specific rules when adding memory modules to ensure that they get dual-channel memory
performance. Intel specifies that motherboards should default to single-channel mode in the event of
any of these being violated:

• DIMMs must be installed in pairs
• Both DIMMs must use the same density memory chips
• Both DIMMs must use the same DRAM bus width
• Both DIMMs must be either single-sided or dual-sided.
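Intel's pairing rules can be expressed as a simple checker. The DIMM fields below are invented for illustration (real modules expose the same information through their SPD data); the function is a sketch of the rules, not a real configuration tool.

```python
# Checker for the dual-channel rules listed above. Field names are
# illustrative; SPD data on real DIMMs carries the same information.

def dual_channel_ok(dimms):
    """True only if the installed DIMMs satisfy the pairing rules."""
    if len(dimms) != 2:
        return False                              # must be installed in pairs
    a, b = dimms
    return (a["density"] == b["density"]          # same chip density
            and a["bus_width"] == b["bus_width"]  # same DRAM bus width
            and a["sides"] == b["sides"])         # both single- or dual-sided

pair = [{"density": "256Mb", "bus_width": 8, "sides": 1},
        {"density": "256Mb", "bus_width": 8, "sides": 1}]
print(dual_channel_ok(pair))  # True: falls back to single-channel otherwise

mismatched = [pair[0], {"density": "512Mb", "bus_width": 8, "sides": 1}]
print(dual_channel_ok(mismatched))  # False: density differs
```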

DDR 2

DDR2 is a newer version of DDR RAM that uses additional techniques to achieve higher performance.
RAMBUS DRAM

Designed by Rambus (the company that developed the Rambus standard), RDRAM uses a Rambus in-line
memory module (RIMM), which is similar in size and pin configuration to a standard DIMM. What
makes RDRAM so different is its use of a special high-speed data bus called the Rambus channel.

Rambus does not make memory chips itself but licenses the technology to memory
manufacturers. Basically, Rambus has defined the interface to the memory module using dedicated
signal levels (resembling ECL current drive), the command set to access the memory and the architecture of
the memory chips. Internally the memory chips use an FPM core running at 100MHz. Rambus
memories consist of an array of such memory cores that can be accessed simultaneously, greatly
increasing speed. However, as with SDRAM, there is a latency time for the first access at a random
address, usually in the order of 45ns. Rambus uses high clock speeds and a narrow data path
(16 or 18 bits); clock speeds can be up to 400MHz. The Rambus interface is of the DDR type, meaning
that data can be transferred on both clock edges. RDRAM memory chips work in parallel to achieve a
data rate of 800 MHz, or 1,600 MBps.

The Rambus design gives more intelligent access to the RAM, meaning that units can "prefetch"
data and in this way take some work off the CPU. The idea is that data is read in small packets at a very high
clock speed. The RIMM modules are only 16 bits wide compared to the traditional 64-bit SDRAM
DIMMs, but they work at a much higher clock frequency. The Rambus modules work at 2.5 volts,
which internally is reduced down to 0.5 volt where possible; this helps to reduce heat and radio emissions.
The RIMMs hold controlling chips that turn off the power to sections not in use. They can also reduce
the memory speed if thermal sensors report overheating.
With densities up to 256Mbit, RDRAM components are available in volume from leading memory
suppliers in a range of speeds from 800MHz to 1200MHz. For systems requiring upgrade flexibility,
RDRAM devices may be configured into single-, dual- or quad-channel RIMM modules to support
bandwidths from 1.6 GB/sec to 10.7 GB/sec and system memory capacities up to 8GB. Both RDRAM
memory devices and RIMM modules are tested for specification compliance with an extensive
validation program that also promotes system interoperability. Additionally, standard RDRAM
memory controllers are available in a wide range of ASIC and foundry processes in a variety of
different advanced geometries from 0.25 to 0.13 micron.

There are also many different types of Rambus modules besides Direct RDRAM, such as RDRAM 2.0,
Rambus DDR and Rambus XDR.
OTHERS

NIBBLE MODE DRAM


With nibble-mode DRAM, the first data bit is accessed in the normal fashion. After the first bit has been
read out, the next three bits can be obtained at the output of the DRAM by pulsing the /CAS line. No new
address is required, but the bits must be in the same row address space. The idea behind this technology
was to have burst mode without the requirement for expensive drive circuits. Nibble-mode DRAMs are
out of fashion today. They can be recognized by the IC number ending in '01'.

STATIC COLUMN DRAM


The first data is accessed in the normal fashion by doing a /RAS-/CAS cycle. Once the first data is
read or written, this DRAM can have its column address changed without requiring a new /CAS strobe
pulse. As such the RAM acts like a static RAM, but only within one row (page). These RAMs were used by
some computer manufacturers (Atari) in the 1980s and by some video board manufacturers; they have
since fallen out of use. They can be recognized by the IC number ending in '02'.

PSRAM
PSRAM stands for Pseudo-static RAM. Basically, PSRAMs are FPM DRAMs with automatic refresh
circuitry incorporated. As such, no external circuitry is required for DRAM refresh, and the chip can be
used just like an SRAM with a few limitations. PSRAMs have fallen out of use nowadays.

MDRAM
Multibank Dynamic Random Access Memory (MDRAM) is a type of RAM used in video cards. It is
very fast, with transfer rates of up to about 1 GB/sec.

PARITY RAM
Parity and Non-Parity is a classification of memory modules. Look at your memory module and count
the chips: an even count (e.g. a 1 x 32 organization) means Non-Parity RAM; an odd count (e.g. 1 x 36)
means Parity RAM. What's the difference? The chips on Non-Parity RAM are all memory chips. On
Parity RAM, all but one of those chips store data, so Non-Parity and Parity RAM both provide the
same number of memory addresses. What's the extra chip for? It stores a parity bit for each byte of
data. It checks the flow of data, detecting errors that would go unnoticed with Non-Parity RAM.
Parity modules are used primarily in servers that cannot afford to crash. In a do-or-die situation,
Parity RAM might be the way to catch those errors.
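As an illustrative sketch (not any particular chipset's logic), even parity works like this: the stored extra bit makes the total count of 1s in each byte even, so a single flipped bit is detectable on read.

```python
def parity_bit(byte):
    # Even parity: the extra bit makes the total number of 1s even.
    return bin(byte).count("1") % 2

def passes_check(byte, stored_parity):
    # A read passes if the recomputed parity matches the stored bit.
    return parity_bit(byte) == stored_parity

data = 0b10110010                      # four 1s -> parity bit 0
p = parity_bit(data)
print(passes_check(data, p))           # clean read: True
print(passes_check(data ^ 0b100, p))   # one flipped bit: False
```

Note that parity can only detect a single-bit error, not correct it; two flipped bits in the same byte would cancel out and pass the check.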

PRAM
Parameter RAM. PRAM is used on Macs to store settings from the Mac equivalent of the Control
Panel, even when the computer is turned off. A small battery preserves the information so it's ready at
boot-up.

SGRAM
Synchronous Graphics RAM. SGRAM offers the capabilities of SDRAM, but for your graphics card,
giving your video memory the same edge that SDRAM gives your system memory.

TAG RAM
This is an interesting idea. Tag RAM stores the addresses of the memory currently held in cache. If the
CPU finds an address in Tag RAM, it looks in the cache; otherwise it's back to the memory modules.
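A toy model in Python shows the lookup flow (the names and the 64-byte line size are illustrative, not any real cache design):

```python
CACHE_LINE = 64  # bytes per cache line (illustrative size)

class TagRam:
    """Maps main-memory line addresses to cache slots."""
    def __init__(self):
        self.tags = {}

    def fill(self, address, slot):
        # Record which cache slot now holds this memory line.
        self.tags[address // CACHE_LINE] = slot

    def lookup(self, address):
        line = address // CACHE_LINE
        if line in self.tags:
            return ("hit", self.tags[line])   # data is in the cache
        return ("miss", None)                 # back to the memory modules

tag = TagRam()
tag.fill(0x1000, 3)
print(tag.lookup(0x1010))   # same 64-byte line -> ('hit', 3)
print(tag.lookup(0x2000))   # -> ('miss', None)
```

The point of keeping the tags in a separate fast RAM is that the hit/miss decision happens before any slow main-memory access is started.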
NOVRAM
A NOVRAM (non-volatile RAM) usually refers to a classic SRAM with a lithium battery incorporated
in the IC package. Data retention is usually 5 to 10 years. CMOS SRAMs have the advantage of
drawing very little power when not being accessed. Moreover, it is enough to keep the supply voltage
above a minimum level (usually 1 to 2V) for the chip to retain its data.

A second class of NOVRAMs consists of a hybrid technology of SRAM and EEPROM. On such a chip
each SRAM cell has an accompanying EEPROM cell. The chip has an input which, when triggered,
forces the chip to copy its SRAM contents into the EEPROM within 10ms. A system using this
technology needs a watchdog circuit placed before the system voltage regulator that warns the memory
when the supply voltage is about to fail. The power supply must be designed to guarantee power for
the short time the NOVRAM needs to copy its data internally. When power is restored, the data is
copied back from the EEPROM into the SRAM. This technology has the advantage of unlimited
read/write cycles at SRAM speed while still being non-volatile. The drawback is that it is expensive,
and therefore sees only limited use.

VRAM
VideoRAM, also known as multiport dynamic random access memory (MPDRAM), is a type of RAM
used specifically for video adapters or 3-D accelerators. The "multiport" part comes from the fact that
VRAM normally has two independent access ports instead of one, allowing the CPU and graphics
processor to access the RAM simultaneously. VRAM is located on the graphics card and comes in a
variety of formats, many of which are proprietary. The amount of VRAM is a determining factor in the
resolution and color depth of the display. VRAM is also used to hold graphics-specific information
such as 3-D geometry data and texture maps. True multiport VRAM tends to be expensive, so today,
many graphics cards use SGRAM (synchronous graphics RAM) instead. Performance is nearly the
same, but SGRAM is cheaper.

WRAM
Window RAM. WRAM allows information to be read out of RAM at the same time it is being written.
This makes WRAM even more effective than VRAM.

MAGNETIC RAM
MRAM uses magnetic storage - similar to a hard drive's - to hold information, as opposed to the
electric charge of existing memory forms. The chips work by storing the ones and zeros that make up
digital data in a magnetic material sandwiched between two metal layers.

Its proponents claim the new technology combines the best virtues of many of the common forms of
memory - the low cost and high capacity of dynamic RAM; the high speed of static RAM; and the
permanence of Flash memory - and has the potential to significantly improve many electronic products
by storing more information, accessing it faster and using less battery power than conventional
electronic memory.
Since MRAM retains information when power is turned off, products like personal computers could
start up instantly, without waiting for software to "boot up." Non-volatility also means reduced power
consumption: since it does not need constant power to keep data intact, MRAM could consume much
less than established random access memory technologies, extending the battery life of cell phones,
handheld devices, laptops and other battery-powered products.

SLDRAM
Many memory manufacturers are putting their support behind SLDRAM as the long-term solution for
system performance. While SLDRAM is a protocol-based design, just as RDRAM is, it is an open
industry standard that requires no royalty payments. This alone should allow for lower cost. Another
cost advantage of the SLDRAM design is that it does not require a redesign of the RAM chips.

Due to the use of packets for address, data and control signals, SLDRAM can operate on a faster bus
than standard SDRAM – up to at least 200MHz. Just as DDR SDRAM operates the output signal at
twice the clock rate, so can SLDRAM. This puts the output operation as high as 400MHz, with some
engineers claiming it can reach 800MHz in the near future. Compared to DRDRAM, SLDRAM seems
a much better solution due to its lower actual clock speed (reducing signal problems), lower latency
timings and lower cost thanks to the royalty-free design and operation on current bus designs. Even
the bandwidth of SLDRAM is much higher than DRDRAM's, at 3.2GB/s vs. 1.6GB/s.

DESIGNATION

PC133, PC2100, PC1600, RDRAM...


What do all the names mean???

SDRAM
Modules and chips that run in a PC at 66MHz are called PC66, those at 100MHz are called PC100,
and those at 133MHz are called PC133.

Please note there is a difference in the timing parameters of a PC and some other systems. Where
-10ns chips can be clocked at 100MHz (10ns clock time) in a non-Intel-specified system, the same
chips can only be used at 66MHz in a PC. PC100 chips are 100MHz or 125MHz chips that can run at
100MHz in a PC.
DDR SDRAM
DDR running at a clock frequency of 100MHz is called PC1600 (200 million transfers/sec x 8 bytes)
DDR running at a clock frequency of 133MHz is called PC2100 (266 million transfers/sec x 8 bytes)
DDR running at a clock frequency of 166MHz is called PC2700 (333 million transfers/sec x 8 bytes)
The next expected standard is PC3200.
PC1600 and PC2100 are JEDEC standards. PC2400 is not a standard; it consists of PC2100 chips that
are overclocked.
PC2700 is not a JEDEC standard but is endorsed by Samsung and a significant number of Taiwanese
motherboard manufacturers.
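The PC ratings come from simple arithmetic: peak bandwidth in MB/sec equals the clock times two transfers per clock times 8 bytes per 64-bit transfer (a rough sketch; the marketing names round the 133MHz and 166MHz results down to 2100 and 2700).

```python
def ddr_peak_mb_s(clock_mhz):
    # 2 transfers per clock (double data rate) x 8 bytes per 64-bit transfer
    return clock_mhz * 2 * 8

print(ddr_peak_mb_s(100))   # 1600 -> PC1600
print(ddr_peak_mb_s(200))   # 3200 -> PC3200
# 133 and 166 MHz give 2128 and 2656 MB/sec, marketed as PC2100 and PC2700.
```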

Please note that the latency is of the same order as that of normal SDRAM; if the controlling system
cannot pipeline data, the actual access time of DDR equals that of SDRAM (DDR266 CL2 = 15ns and
PC133 CL2 = 15ns). The CAS Latency number is not necessarily an integer; it can be an integer plus
half a clock cycle (e.g. CL=2.5).
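The 15ns figure follows directly from the clock period: latency in nanoseconds is the CAS latency in cycles times the cycle time (a back-of-envelope sketch using nominal clock frequencies).

```python
def cas_latency_ns(cl, clock_mhz):
    # latency (ns) = cycles x cycle time, where cycle time (ns) = 1000 / MHz
    return cl * 1000 / clock_mhz

print(round(cas_latency_ns(2, 133), 1))    # PC133 CL2: ~15.0 ns
print(round(cas_latency_ns(2.5, 166), 1))  # half-cycle latencies work too
```

This is why DDR at the same base clock does not reduce first-access latency: the CAS delay is counted in base-clock cycles, and only the burst after it is doubled in rate.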

RDRAM
RDRAM uses a bank of four modules. RDRAM transfers 16 bits at a time (16 bits = two 8-bit data
bytes), while SDRAM transfers data in blocks of 64 bits at a time (64 bits = eight 8-bit data bytes). So
the much-advertised effective speed of 800MHz RDRAM means only a peak data transfer rate of
1600 MB/sec: 800 MHz times 2 bytes at a time. Rambus modules exist in 300MHz (600), 350MHz
(700), 400MHz (800) and 533MHz (1066) clock speeds.

Note that within the same speed grade there are modules with different latency times (e.g. PC800 is
available in 45ns and 40ns grades).
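The arithmetic behind the naming can be sketched as follows: the module name is twice the clock (double data rate), and peak MB/sec is that effective rate times the 2-byte channel width.

```python
def rdram_peak_mb_s(clock_mhz):
    effective = clock_mhz * 2   # double data rate, e.g. 400MHz clock -> "PC800"
    return effective * 2        # x 2 bytes per 16-bit transfer -> MB/sec

for clock in (300, 350, 400, 533):
    print(f"{clock}MHz clock -> PC{clock * 2}: {rdram_peak_mb_s(clock)} MB/sec")
```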

RAM type Theoretical max. bandwidth


SDRAM 100 MHz 100 MHz X 64 bit = 800 MB/sec
SDRAM 133 MHz 133 MHz X 64 bit = 1064 MB/sec
DDRAM 200 MHz (PC1600) 2 X 100 MHz X 64 bit = 1600 MB/sec
DDRAM 266 MHz (PC2100) 2 X 133 MHz X 64 bit = 2128 MB/sec
DDRAM 333 MHz (PC2700) 2 X 166 MHz X 64 bit = 2656 MB/sec
RDRAM 600 MHz 600 MHz X 16 bit = 1200 MB/sec
RDRAM 700 MHz 700 MHz X 16 bit = 1400 MB/sec
RDRAM 800 MHz 800 MHz X 16 bit = 1600 MB/sec
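Every row of the table reduces to one formula: bus width in bytes times transfers per second. A minimal sketch (the `pumped` parameter, set to 2 for double-data-rate parts, is my naming):

```python
def peak_mb_s(clock_mhz, bus_bits, pumped=1):
    # MB/sec = transfers per second x bytes per transfer
    return clock_mhz * pumped * bus_bits // 8

print(peak_mb_s(100, 64))            # SDRAM 100MHz -> 800
print(peak_mb_s(133, 64))            # SDRAM 133MHz -> 1064
print(peak_mb_s(133, 64, pumped=2))  # DDR PC2100  -> 2128
print(peak_mb_s(800, 16))            # RDRAM 800MHz -> 1600
```

The comparison makes RDRAM's trade-off visible: it reaches SDRAM-class bandwidth over a bus one quarter as wide by clocking that narrow bus far faster.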
SYSTEM
Initially, SRAM was not widely used for main computer memory but only as cache, since DRAM had
a big advantage in cost, especially after Intel introduced the Pentium and many motherboards added
DIMM slots to support the DRAM of the day. Basically, every DRAM design serves the same purpose
regardless of the system; from FPM to BEDO DRAM, they can be used in any system that supports
them.

SDRAM came out to provide more performance as PC operating systems such as Windows 95 grew
heavier. Most systems support SDRAM technology, and it is still used today regardless of the user or
platform (Cyrix, WinChip, Intel, AMD).

Then RDRAM and DDR SDRAM arrived as demanding applications such as 3D renderers and 3D
games needed more memory bandwidth. Rambus cooperated with Intel when the Pentium 4 was
introduced, but Intel quickly turned to DDR after its competitor AMD introduced the Athlon XP,
which used DDR. RDRAM continues to decline due to its price and compatibility problems.

DDR2 was initially designed for graphics cards, but due to the high demand for performance, DDR2
will come to market for users who need it.

CLASS
As with systems, RAM is usually general purpose, and its usage depends on the current technology.
Nowadays SDRAM is still used in all kinds of systems. DDR SDRAM is gradually taking over from
SDRAM, and this will continue as dual-channel technology and DDR2 are introduced. RDRAM,
usually found in Pentium 4 systems, must reconsider its target market, though development of the
next generation of RDRAM is still ongoing.

How much RAM to use depends on the needs of the system. Unlike processors and graphics cards, the
user simply installs more RAM for heavier workloads, or adds it when needed. We can see that 128MB
is enough for Windows 98 and 512MB is good for Windows XP, but as 64-bit systems for home users
arrive with the AMD Athlon 64, more memory will be needed as software development also shifts
from 32 bits to 64 bits.

RESOURCES

AnandTech

HowStuffWorks

KarbosGuide

PCTechGuide

Rambus

Tom's Hardware Guide

www.drix.be/ram.htm

www.PC100.com
