

Today almost everyone knows what RAM is, but I suspect few know its history or the various types of RAM, from the older ones onward. The purpose of my seminar on CHANGING TRENDS IN RAMS is to create awareness of the various types of random access memory, from the oldest to the latest technologies; to compare them and recognize their advantages and disadvantages; and to give some details about each of them.

1. RAM: RAM stands for Random Access Memory. Unlike ROM, RAM is a read-write memory. Strictly it should be called RWM, because the name "random access" may imply that ROM does not allow random access, which is not true. We call it random access because all earlier read-write memories were sequential and did not allow random access. RAM is a volatile type of memory: its contents are lost when the power is turned off. RAM holds the programs and data that you are currently working on.

2. BASIC CLASSIFICATION OF RAM: RAM is basically divided into 2 types: 1) Static RAM 2) Dynamic RAM

3. SRAM: SRAM stands for Static RAM. SRAM holds its data as long as power is supplied, without external refresh; that is why it is called static.
3.1 Basic cell structure: The basic cell of SRAM is composed of either 4 or 6 transistors. The 4T cell contains 2 additional resistors, while the 6T cell contains no resistors. Generally a 6T cell is used to construct SRAMs, so representing one bit requires 6 transistors, and you get less memory per chip.
3.2 Advantages: -

SRAM is faster than DRAM. It is faster because it does not require refreshing.

3.3 Disadvantages: SRAM is more expensive per byte than DRAM, because one memory bit of SRAM requires 6 transistors. SRAM also takes much more space than DRAM.
3.4 Usages: Ideally you want the memory of your PC to be fast, large and inexpensive, but it is not possible to satisfy all 3 criteria simultaneously. SRAM is fast, but it is very expensive and not compact. Therefore it is not used as the main memory of a computer; instead it is used as cache memory. Cache memory needs to be small and fast, which is why SRAM is suitable for it.

4. DRAM: DRAM stands for Dynamic RAM. It is called dynamic because this type of memory needs to be refreshed many times per second. The reason refreshing is needed lies in the basic cell structure of DRAM. The basic cell of DRAM is composed of 1 transistor and 1 capacitor; DRAM stores data as charge on the capacitor. The problem with capacitors is that they hold charge only for a short time, and the capacitors used in the chip are so tiny that they discharge very quickly. This is why refreshing is needed: in a refresh we read the contents of every cell and write them back before they are lost. This refresh operation must be performed hundreds of times a second.
4.1 Access to DRAM: DRAM memory can be thought of as a matrix or table of cells, which is accessed by specifying a row and a column address. In a typical DRAM access the row address is specified first, followed by the column address.
4.2 Advantages: DRAM is economically much cheaper than SRAM, because one cell of DRAM requires only 1 transistor and 1 capacitor, compared to 6 transistors in SRAM. DRAM is also more compact than SRAM.
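The leaky-capacitor behavior and the read-and-write-back refresh described above can be sketched as a toy model. Everything here (the class name, the tick-based leak model, the retention figure) is illustrative, not taken from any real DRAM datasheet:

```python
# Toy DRAM array: cells addressed by (row, column), charge that "leaks"
# as time passes, and a refresh pass that reads every cell and writes
# the value back before the charge is lost.

class ToyDRAM:
    def __init__(self, rows, cols, retention_ticks=4):
        self.cells = [[0] * cols for _ in range(rows)]
        self.age = [[0] * cols for _ in range(rows)]  # ticks since last write/refresh
        self.retention = retention_ticks

    def write(self, row, col, bit):
        self.cells[row][col] = bit
        self.age[row][col] = 0          # a write restores full charge

    def read(self, row, col):
        if self.age[row][col] >= self.retention:
            self.cells[row][col] = 0    # charge leaked away: data lost
        return self.cells[row][col]

    def tick(self):                     # one unit of time passes
        for row in self.age:
            for c in range(len(row)):
                row[c] += 1

    def refresh(self):                  # read every cell, write it back
        for r in range(len(self.cells)):
            for c in range(len(self.cells[r])):
                self.write(r, c, self.read(r, c))

refreshed = ToyDRAM(2, 2)
refreshed.write(0, 1, 1)
for _ in range(8):
    refreshed.tick()
    refreshed.refresh()                 # refreshed in time: bit survives

neglected = ToyDRAM(2, 2)
neglected.write(0, 1, 1)
for _ in range(8):
    neglected.tick()                    # never refreshed: bit decays

print(refreshed.read(0, 1), neglected.read(0, 1))  # -> 1 0
```

Real DRAM refreshes whole rows at a time rather than individual cells, but the principle is the same: every cell must be rewritten within its retention window.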

4.3 Disadvantage: DRAM is much slower than SRAM, because DRAM needs to be refreshed many times per second.

4.4 Usage: Given these advantages, DRAM is used as the main memory of the computer.
4.5 Classification of DRAM: -
5. FPMDRAM: FPM stands for Fast Page Mode. In FPMDRAM, once a memory bit is addressed by row and column selection, the row address of that bit is latched into a special latch, and the next column in the same row is selected, on the assumption that the data needed by the program resides adjacent to the previous location. FPMDRAM supports bus speeds up to 66 MHz and has an access time of 60 ns. FPMDRAM typically allows burst system timings as fast as 5-3-3-3 at 66 MHz.
5.1 Burst: Burst means that after the 1st memory location is selected by row and column, the next 3 addresses are generated internally, eliminating the time needed to input a new column address. A burst timing of 5-3-3-3 means FPMDRAM needs 5 clock cycles to locate the 1st memory bit and 3 cycles for each of the next 3 addresses.
5.2 Advantages: Significant improvement in access time compared to conventional DRAM.
5.3 Disadvantages: FPMDRAM is not suitable for all operations. The assumption that the data needed by the program resides adjacent to the previous location may at times be false, and where it is, there is no advantage to using FPMDRAM. FPMDRAM is also not suitable for memory bus speeds over 66 MHz.
5.4 Current position: Today FPMDRAM is not used, because of continuous improvement in RAM technology.
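The burst-timing arithmetic is worth making concrete. A small sketch (function names are my own) that totals the cycles in a 5-3-3-3 burst and converts them to time at a 66 MHz bus, where one clock period is roughly 15 ns:

```python
# Total cost of a four-access burst, given a timing pattern like
# 5-3-3-3: the first number is the slow initial access (row + column
# setup), the rest are the fast follow-on accesses in the same row.

def burst_cycles(timing):
    return sum(timing)

def burst_time_ns(timing, bus_mhz):
    # one clock period in ns is 1000 / bus_mhz
    return sum(timing) * 1000.0 / bus_mhz

fpm = (5, 3, 3, 3)
print(burst_cycles(fpm))                  # -> 14 cycles for 4 accesses
print(round(burst_time_ns(fpm, 66), 1))   # -> 212.1 ns for the whole burst
```

So even with fast-page mode, reading four adjacent words at 66 MHz takes 14 clocks rather than the 20 a conventional 5-5-5-5 DRAM would need.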

6. EDODRAM: EDO stands for Extended Data Out. It is also called hyper page mode DRAM.

It is the last major improvement in the asynchronous DRAMs, slightly faster than FPM memory due to another evolutionary tweak in how memory access works. In simplified terms, EDO memory has had its timing circuits modified so that one access to the memory can begin before the last one has finished. EDODRAM does not wait for complete processing of a bit; as soon as the 1st bit is located, it looks for the next bit. It is therefore slightly faster than FPM memory, giving a performance boost of around 3-5% over FPM in most systems. Theoretically EDO DRAM can only be used at bus speeds up to 66 MHz, but it has been shown that EDO can be used with bus speeds of up to 83.3 MHz. EDODRAM typically allows burst system timings as fast as 5-2-2-2 at 66 MHz when using an optimized chipset, i.e. 3 clock cycles less per burst than FPMDRAM, thereby improving access time from 60 ns to 45 ns. EDO uses the same amount of silicon and the same package size, but it requires support from the system chipset. Introduced in 1994, it is supported by most newer Pentium systems, as well as some of the later PCI-based 486 motherboards.
6.1 Advantages: Improvement in access time from 60 ns (FPM) to 45 ns. It is cheaper than FPMDRAM. Almost all motherboards support EDODRAM.
6.2 Disadvantages: EDODRAM is not suitable for memory bus speeds over 83 MHz.
6.3 Current position: Today EDODRAM is rarely used because of the arrival of SDRAM.
7. BEDO DRAM: BEDO stands for Burst EDO. The phrase used for BEDO is "a good idea that was dead before it was born." In BEDO, EDO memory is combined with pipelining technology and special latches to allow much faster access times than regular EDO. BEDO allows system timings of 5-1-1-1 when used with a supporting chipset, and it supports bus speeds up to 100 MHz. In spite of supporting 100 MHz bus speeds and providing better burst timings, BEDODRAM failed totally in the market. The reason is that, unfortunately for BEDO, Intel decided that EDO was no longer viable and that SDRAM was their preferred memory architecture, so they did not implement support for BEDO in their chipsets.
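Putting the burst timings quoted so far side by side shows exactly what each asynchronous-DRAM generation bought (a small comparison sketch, not benchmark data):

```python
# Cycles per four-access burst for each scheme's typical timing at
# 66 MHz, as quoted in the text. Each successive tweak shaves cycles
# off the three follow-on accesses; the slow 5-cycle first access stays.

timings = {
    "FPM":  (5, 3, 3, 3),
    "EDO":  (5, 2, 2, 2),
    "BEDO": (5, 1, 1, 1),
}
for name, t in timings.items():
    print(f"{name}: {sum(t)} cycles per burst")
# FPM: 14, EDO: 11, BEDO: 8
```

BEDO's 5-1-1-1 matches what SDRAM later delivered, which is part of why it is remembered as a good idea that arrived too late.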

8. SDRAM: -

SDRAM stands for Synchronous DRAM. It is the technology that completely eliminated EDODRAM from the market. SDRAM has the ability to synchronize its operations with the processor clock. It is designed to read or write from memory in burst mode at a speed of 1 clock cycle per access, i.e. imposing virtually no wait states. SDRAM allows system timings of 5-1-1-1 at 66 MHz.
8.1 PC100 SDRAM: Initially SDRAM modules were designed to support bus speeds up to 100 MHz, but some delays are introduced by the various signal synchronizations, so these modules did not operate well at 100 MHz; instead they worked fine up to 83 MHz. When Intel decided to officially implement a 100 MHz system bus speed, they understood that most of the SDRAM modules available at that time would not operate properly above 83 MHz. In order to bring some semblance of order to the marketplace, Intel introduced the PC100 specification as a guideline to manufacturers for building modules that would function properly at 100 MHz. PC100 running at 100 MHz gave a performance boost of 10% and reduced the access time from 45 ns (EDODRAM) to just 10 ns.
8.2 PC133 SDRAM: PC100 SDRAM sounds good, and it was the best RAM available at that time, but technology never stops improving, and soon the 133 MHz bus was introduced. PC100 SDRAM has a theoretical design limitation of 125 MHz, so Intel introduced a new specification called PC133 SDRAM for a memory bus speed of 133 MHz. PC133 SDRAM running at 133 MHz reduced the access time from 10 ns (PC100) to 7.5 ns.
8.3 Characteristics and concerns regarding SDRAMs: There are several important characteristics and concerns regarding SDRAMs that are relatively unique to the technology.
8.3.1 Speed and Speed Matching: SDRAM modules are generally speed-rated in two different ways. First, they have a "nanosecond" rating like conventional asynchronous DRAMs, so SDRAMs are sometimes referred to as "12 nanosecond" or "10 nanosecond" parts. Second, they have an "MHz" rating, so they are called "83 MHz" or "100 MHz" SDRAMs, for example. Because SDRAMs are synchronous, they must be fast enough for the system in which they are used. With asynchronous DRAMs such as EDO or FPM, it was common to add extra wait states to the access timing to compensate for memory that was too slow. With SDRAM, however, the whole point of the technology is to run with zero wait states, so the memory must be fast enough for the bus speed of the system. One place people run into trouble here is taking the reciprocal of the "nanosecond" rating of the module and concluding that the module can run at that speed. For example, the reciprocal of 10 ns is 100 MHz, so people assume that 10 ns modules will definitely be able to run on a 100 MHz system. The problem is that this allows absolutely no room for slack. In practice, you really want memory rated slightly higher than what is required, so 10 ns modules are really intended for 83 MHz operation. 100 MHz systems require faster memory, which is why the PC100 specification was developed.
8.3.2 Latency: SDRAMs are still DRAMs, and therefore still have latency. The fast 12, 10 and 8 nanosecond numbers that everyone talks about refer only to the second, third and fourth accesses in a four-access burst. The first access is still a relatively slow 5 cycles, just as it is for conventional EDO and FPM memory.
8.3.3 2-Clock and 4-Clock Circuitry: There are two slight variations in the composition of SDRAM modules, commonly called 2-clock and 4-clock SDRAMs. They are almost exactly the same and use the same DRAM chips, but they differ in how they are laid out and accessed. A 2-clock SDRAM is structured so that each clock signal controls 2 different DRAM chips on the module, while a 4-clock SDRAM has clock signals that can each control 4 different chips. You need to make sure you get the right kind for your motherboard. The current trend appears to be toward 4-clock SDRAMs.
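The speed-matching pitfall from 8.3.1 is pure arithmetic and easy to demonstrate (a quick sketch; the function name is my own):

```python
# The naive reciprocal rule: a module's "nanosecond" rating inverted
# gives an apparent maximum clock, but that leaves zero headroom.

def naive_max_mhz(ns_rating):
    return 1000.0 / ns_rating

print(naive_max_mhz(10))             # -> 100.0 MHz, with no slack at all
print(round(naive_max_mhz(12), 1))   # -> 83.3 MHz
```

So a "10 ns" module is really an 83 MHz part once margin is accounted for, and a 100 MHz system needs PC100-qualified modules rather than anything whose reciprocal merely reaches 100.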
8.3.4 Packaging Concerns: To make matters even more confusing, SDRAM usually comes in DIMM packaging, which itself comes in several different formats.
8.4 Advantages: The major advantage of SDRAM is that it performs its operations in synchronization with the processor clock. It improved the access time from 45 ns (EDODRAM) to 7.5 ns (PC133 SDRAM).
8.5 Some flaws: After reading so much about the virtues of SDRAM, you're probably wondering why anyone would want anything but SDRAM. FPM RAM can only get access speeds down to 60 ns, EDO DRAM can only get to 45 ns, but SDRAM has gotten the time it takes your processor to access a memory address down to only 10 ns! And SDRAM almost eliminates those pesky wait states. By synchronizing itself to your CPU, SDRAM can interface with your processor at every clock cycle! It can even support bus speeds higher than any type of EDO DRAM. SDRAM sounds incredible.

If you've seen benchmarks on RAM, though, you might have noticed something: SDRAM only gives you about a 5-10% performance increase over EDO DRAM. There are two main reasons why SDRAM fails to perform. The first is that the EEPROM found on other memory modules is absent from some SDRAM DIMMs; used in RAM, this EEPROM helps the memory function properly, and without it SDRAM takes a performance hit. The second reason is that SDRAM does not have proper chipset support. Like many other types of RAM, SDRAM needs the support of the chipset to function at its best. Since the chipsets used today are optimized for EDO and not SDRAM, SDRAM is once again put at a disadvantage. It looks like SDRAM isn't destined to take over the market like it should have.

9. DDRSDRAM: DDR stands for Double Data Rate. It is also called SDRAM-II. Only a few years ago, "regular" SDRAM was introduced as a proposed replacement for the older FPM and EDO asynchronous DRAM technologies, due to the limitations the older memory has in systems with higher bus speeds (over 83 MHz). In the next couple of years, as system bus speeds increase further, the bell will toll for SDRAM itself. One of the proposed new standards to replace SDRAM is Double Data Rate SDRAM, or DDR SDRAM. There are several competing new standards on the horizon that are very promising; however, most of them require special pinouts, smaller bus widths, or other design considerations. In the short term, Double Data Rate SDRAM looks very appealing. Though PC133 SDRAM supports a memory bus speed of 133 MHz, that is not quite sufficient to accommodate the increasing bandwidth of future processors. Therefore a new RAM technology called DDR has come up.
9.1 Basic principle of operation: The basic principle of DDRSDRAM is that it transfers data on both the falling and the rising edge of the clock, while traditional SDRAM modules transfer data only on the rising edge. The DDR SDRAM design can thus effectively double the speed of operation, up to at least 200 MHz. DDR SDRAM is currently the most popular and most widely used technology; almost all motherboards available today support it.
9.2 Advantages: DDR SDRAM has doubled the speed of operation compared to SDRAM.

DDR SDRAM works fine up to a memory bus speed of 266 MHz.
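The double-pumping principle reduces to one line of arithmetic. A sketch (my own helper, not a real API) showing how a 133 MHz clock yields the 266 MHz effective rate mentioned above:

```python
# Effective transfer rate: SDR memory moves data only on the rising
# clock edge; DDR memory moves data on both edges, doubling the
# effective rate for the same physical clock.

def effective_rate_mhz(clock_mhz, double_pumped):
    return clock_mhz * (2 if double_pumped else 1)

print(effective_rate_mhz(133, double_pumped=False))  # -> 133 (PC133 SDR)
print(effective_rate_mhz(133, double_pumped=True))   # -> 266 (DDR at 133 MHz)
```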

10. RDRAM: RDRAM stands for Rambus DRAM. It was developed by RAMBUS INC.
10.1 Need for RDRAM: After seeing DDRSDRAM, you will probably wonder why they are developing yet another standard. The answer is that, with the arrival of GHz CPUs and the ever-growing demands of today's software, SDRAM seems to have run into a bandwidth limitation. Let us calculate the theoretical bandwidth of SDRAM, which uses an 8-byte wide data bus:
PC100 SDRAM: 100 MHz x 8 Bytes = 800 MB/s = 0.8 GB/s
PC133 SDRAM: 133 MHz x 8 Bytes = 1064 MB/s = 1.064 GB/s
This shows that SDRAM is simply insufficient to handle the upcoming GHz CPUs. DDRSDRAM might promise increased memory bandwidth, but it will run into several timing, latency, and propagation delay problems due to its wide data bus and ever-increasing clock speed. So RAMBUS came up with a completely different architecture. DRDRAM works more like an internal bus than a conventional memory subsystem. It is based around what is called the Direct Rambus Channel, a high-speed 16-bit bus running at a clock rate of 400 MHz. This is an entirely different approach from the way memory is currently accessed over a wide 64-bit memory bus. It may seem counterproductive to narrow the channel, since that reduces bandwidth; however, the channel is then capable of running at much higher speeds than would be possible if the bus were wide. Since RDRAM transfers data on both the rising and falling edges of the clock, it has an effective 800 MHz memory rating; by reducing the channel width, they are able to implement a very high bus speed of 400 MHz. Now let us calculate the bandwidth of RDRAM:
RDRAM: 800 MHz x 2 Bytes = 1600 MB/s = 1.6 GB/s
This shows that RDRAM is quite capable of handling GHz CPUs.
10.2 RDRAM vs. SDRAM: -

RDRAM promises a bandwidth twice that of PC100. This is true to some extent, but only when comparing PC800 RDRAM with PC100 SDRAM. PC800? PC100? Confusing to say the least, as that would suggest PC800 is 8x the speed of PC100. Upon closer examination, RDRAM uses a 2-byte (16-bit) wide data bus versus SDRAM's 8-byte (64-bit) wide data bus. Furthermore, the PC800 rating is a bit confusing, as PC800 RDRAM is actually a double-pumped module operating at a 400 MHz clock speed. Double-pumped simply means data is transferred to the RDRAM on both the rising and falling edges of the clock, often referred to as double data rate (DDR), creating an effective 800 MHz memory rating. PC100 SDRAM is referred to as single data rate (SDR) and operates at a 100 MHz clock speed; it can only transfer data on the rising edge of the clock, thus having an effective 100 MHz memory rating. If we compare theoretical bandwidth without taking memory latency into consideration, we end up with the following:
PC800 RDRAM: 800 MHz x 2 Bytes = 1600 MB/s = 1.6 GB/s
PC100 SDRAM: 100 MHz x 8 Bytes = 800 MB/s = 0.8 GB/s
However, both the VIA chipset and the 440BX support 133 MHz (PC133) memory, though unofficially for the 440BX. The theoretical bandwidth of PC133 memory is:
PC133 SDRAM: 133 MHz x 8 Bytes = 1064 MB/s = 1.064 GB/s
These numbers suggest performance above what we have seen in real-world benchmarks. That is quite true, as these are theoretical numbers that do not take the memory's latency into account, which makes a world of difference. It is, however, a shame that these idealized numbers are used to promote one architecture's superiority over another. Theoretical bandwidth alone cannot be used to measure memory architecture superiority: memory latency imposes too much of a penalty on actual memory bandwidth, and it differs for every architecture.
Therefore, to really determine architectural superiority, these latencies must be accounted for. RDRAM is a memory architecture with a packet-based protocol, where access latency depends on how far from the memory controller a device resides. Although systems with multiple RDRAMs have slightly increased latencies compared to single-RDRAM systems, RDRAM latency is still somewhat comparable to that of SDRAM systems. Moreover, the RDRAM protocol and architecture facilitate memory concurrency and minimize latency compared to SDRAM memory systems when multiple memory references are being serviced simultaneously. The number of RDRAMs does not affect peak bandwidth, and an RDRAM-based memory system provides peak bandwidth twice that of PC100 SDRAM. The 1.6 GB/s bandwidth of RDRAM is achieved with only a 16-bit data bus, and when combined with control signals the memory controller needs only about one third of the I/O channels that SDRAM does. SDRAM uses a different approach: it has a parallel data bus, 64 bits wide, and adding modules to the system has no effect on memory latency. In addition to the 64-bit data bus, the memory controller must drive a multiplexed row and column address to the SDRAMs along with control signals.
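The bandwidth figures traded back and forth in this comparison all come from one formula, effective clock times bus width. A sketch reproducing the numbers from the text (the helper name is my own):

```python
# Theoretical peak bandwidth in MB/s: effective clock (MHz, already
# doubled for double-pumped parts) times data-bus width in bytes.

def bandwidth_mb_s(effective_mhz, bus_bytes):
    return effective_mhz * bus_bytes

print(bandwidth_mb_s(100, 8))  # PC100 SDRAM: 800 MB/s
print(bandwidth_mb_s(133, 8))  # PC133 SDRAM: 1064 MB/s
print(bandwidth_mb_s(800, 2))  # PC800 RDRAM (400 MHz double-pumped): 1600 MB/s
```

Note how RDRAM reaches twice PC100's bandwidth despite a bus one quarter as wide: the narrow channel is what makes the 400 MHz double-pumped clock feasible.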

10.3 RDRAM vs. SDRAM performance: Memory performance is actually measured with two metrics: bandwidth and latency. Surprisingly, RDRAM not only offers higher bandwidth, but its latency is also improved relative to SDRAM. What may be even more surprising is that PC133 SDRAM latency is worse than PC100 SDRAM. How is component latency defined? The accepted definition of latency is the time from the moment the RAS (Row Address Strobe) is activated (ACT command sampled) to the moment the first data bit becomes valid. Synchronous device timing is always a multiple of the device clock period. The fundamental latency of a DRAM is determined by the speed of the memory core. All SDRAMs use the same memory core technology, so all SDRAMs are subject to the same core latency; any differences in latency between SDRAM types are therefore only the result of differences in the speed of their interfaces. At a 400 MHz data bus, the interface to an RDRAM operates with an extremely fine timing granularity of 1.25 ns, resulting in a component latency of 38.75 ns. The PC100 SDRAM interface runs with a coarse timing granularity of 10 ns. Its interface timing matches the memory core timing very well, so its component latency ends up being 40 ns. The PC133 SDRAM interface, with its coarse timing granularity of 7.5 ns, incurs a mismatch with the timing of the memory core that increases the component latency significantly, to 45 ns. The latency timing values can be computed easily from the device data sheets. For PC100 and PC133 SDRAMs, the component latency is the sum of the tRCD and CL values. The RDRAM's component latency is the sum of the tRCD and tCAC values, plus one half-clock period for the data to become valid. Although component latency is an important factor in system performance, system latency is even more important, since it is system latency that reduces overall performance.
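The data-sheet recipe above (tRCD + CL, in clock periods) can be checked numerically. The specific cycle counts below are illustrative values chosen to reproduce the 40 ns and 45 ns figures quoted in the text; they are not copied from any particular data sheet:

```python
# SDRAM component latency per the text's recipe: (tRCD + CL) clock
# periods. PC133's finer 7.5 ns period forces it to spend more whole
# cycles covering the same fixed core delay, which is the mismatch
# that makes its component latency worse than PC100's.

def sdram_component_latency_ns(trcd_cycles, cl_cycles, period_ns):
    return (trcd_cycles + cl_cycles) * period_ns

# Illustrative cycle counts that reproduce the quoted figures:
print(sdram_component_latency_ns(2, 2, 10.0))  # PC100: 40.0 ns
print(sdram_component_latency_ns(3, 3, 7.5))   # PC133: 45.0 ns
# RDRAM, with its 1.25 ns granularity, lands at the quoted 38.75 ns
# (tRCD + tCAC plus a half clock for the data to become valid).
```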
System latency is determined by adding external address and data delays to the component latency. For PCs, system latency is measured as the time to return 32 bytes of data, also referred to as the 'cache line fill' data, to the CPU. In a system, SDRAMs suffer from what is known as the two-cycle addressing problem: the address must be driven for two clock cycles (20 ns at 100 MHz) in order to give the signals time to settle on the SDRAM's highly loaded address bus. After the two-cycle address delay and the component delay, three more clocks are required to return the 32 bytes of data. The system latency of PC100 and PC133 SDRAM thus adds five clocks to the component latency. The total SDRAM system latency is:

40 + (2 x 10) + (3 x 10) = 90 ns for PC100 SDRAM
45 + (2 x 7.5) + (3 x 7.5) = 82.5 ns for PC133 SDRAM
The superior electrical characteristics of RDRAM eliminate the two-cycle addressing problem, requiring only 10 ns to drive the address to the RDRAM. The 32 bytes of data are transferred back to the CPU at 1.6 GB/s, which works out to 18.75 ns. Adding in the component latency, the RDRAM system latency is:
38.75 + 10 + 18.75 = 67.5 ns for PC800 RDRAM
Measured at either the component or the system level, RDRAM has the fastest latency. Surprisingly, due to the mismatch between its interface and core timing, PC133 SDRAM latency is significantly higher than PC100 SDRAM. The RDRAM's low latency, coupled with its 1.6 GB/s bandwidth, provides the highest possible sustained system performance. From a performance point of view, we must note that L1 and L2 cache hits and misses contribute greatly to memory architecture performance. Also, individual programs vary in memory use and so have different impacts on performance. For example, a program that does random database searches over a large chunk of memory will 'thrash' the caches, and the memory architecture with the lowest latency will have the advantage. On the other hand, large sequential memory transfers with little requirement for CPU processing can easily saturate SDRAM bandwidth; RDRAM will have an advantage here with its higher bandwidth. For code that fits nicely within the L1/L2 caches, memory type will have virtually no impact at all.
10.4 Advantages: RDRAM has reduced the interface timing granularity to 1.25 ns. RDRAM can operate at a 400 MHz bus speed.
10.5 Disadvantages: RDRAM is more expensive than other RAMs. Not all motherboards support RDRAM.
10.6 THE FUTURE MEMORY: RDRAM is called the future memory. RDRAM is not perfect, but it is currently one of the most promising solutions to bandwidth, latency and propagation delay problems, and it is scalable, a distinct advantage.
It is expensive, but that's partly because it's new and the market has not caught on yet. Once more manufacturers start selling RDRAM and it becomes as commonplace as SDRAM is now, we will see its prices dropping too. Due to the nature of the manufacturing process it will probably never be as affordable as SDRAM, but then again SDRAM doesn't offer the same performance, which is what you're actually paying for. And since Intel has chosen to implement support for the RDRAM memory architecture in its upcoming chipsets, the future of RDRAM is bright.
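The worked system-latency numbers in this section can be verified in a few lines (helper names are my own; the inputs are the figures quoted in the text):

```python
# System latency = component latency + external address delay + data
# transfer time for the 32-byte cache line fill.

def sdram_system_latency_ns(component_ns, period_ns):
    # two-cycle addressing plus three clocks to return 32 bytes
    return component_ns + 2 * period_ns + 3 * period_ns

def rdram_system_latency_ns(component_ns, addr_ns, xfer_ns):
    # RDRAM avoids two-cycle addressing; data returns at 1.6 GB/s
    return component_ns + addr_ns + xfer_ns

print(sdram_system_latency_ns(40.0, 10.0))          # PC100: 90.0 ns
print(sdram_system_latency_ns(45.0, 7.5))           # PC133: 82.5 ns
print(rdram_system_latency_ns(38.75, 10.0, 18.75))  # PC800: 67.5 ns
```

The arithmetic confirms the text's conclusion: PC133 beats PC100 at the system level despite its worse component latency, and PC800 RDRAM beats both.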




12. VIDEO RAM: After seeing all the RAM types from FPMDRAM to RDRAM, let us now discuss a different type of RAM called VRAM. VRAM stands for Video RAM. Video boards, especially frame buffer versions, traditionally used regular DRAM as their video memory, but the weaknesses of ordinary DRAM become more prominent at high resolutions. A video board with general-purpose DRAM may not be able to manipulate data fast enough while also ensuring an acceptable, flicker-free image on the screen: delivering large amounts of video information while continuously refreshing the DRAM's contents is a time-consuming affair. With more pixels and colors, the physical limitations of the memory leave only the option of cutting down either the pixel resolution or the color depth; at higher depths, DRAM is limited in its ability to act as a frame buffer. Video RAM is a specialty video memory that eases the speed-resolution barrier that exists in video processing with ordinary DRAM. Modern video adapters use their own specialized RAM, separate from the main system memory, called VRAM. Video RAM, also known as multiport dynamic random access memory (MPDRAM), is a type of RAM used specifically for video adapters or 3-D accelerators. The "multiport" part comes from the fact that VRAM normally has both a random access port and a serial access port. VRAM is located on the graphics card and comes in a variety of formats, many of which are proprietary. The amount of VRAM is a determining factor in the resolution and color depth of the display. VRAM is also used to hold graphics-specific information such as 3-D geometry data and texture maps.

The demands placed on video memory are far greater than those placed on system memory.
12.1 DUAL PORTING: Video RAM uses a special technique called dual porting. In addition to the video image being accessed and changed by the processor on a continual basis (many times a second when you are running a game, for instance), the video card also must access the memory contents between 50 and 100 times per second to display the information on the monitor. Video cards have therefore spawned the creation of several new, innovative memory technologies, many of them designed to allow the memory to be accessed by the processor and read by the video card's refresh circuitry simultaneously. This is called dual porting and is found on Video RAM, or VRAM. Cards using this type of memory are faster and more expensive than ones using FPM or EDO DRAM.
12.2 Advantages: VRAM allows higher resolution and color depth. It is thanks to VRAM that we are able to play games like Quake 3.
12.3 Disadvantages: VRAM is much more expensive than ordinary DRAM.
12.4 OTHER VIDEO DRAM TECHNOLOGIES: In addition to VRAM, several other memory technologies have evolved to maximize the performance of the video card. They are: 1. EDOVRAM (Extended Data Out Video Random Access Memory) 2. 3DRAM (3 Dimensional Random Access Memory) 3. WRAM (Window Random Access Memory) 4. SGRAM (Synchronous Graphics Random Access Memory).
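The pressure that screen refresh alone puts on a single-ported frame buffer is easy to quantify. A sketch (the resolution, depth and refresh figures are illustrative examples, not requirements from the text):

```python
# Bandwidth consumed just repainting the screen, before the CPU
# touches a single pixel: resolution x bytes per pixel x refresh rate.
# Dual-ported VRAM lets this serial readout proceed in parallel with
# CPU writes; single-ported DRAM must interleave the two.

def refresh_bandwidth_mb_s(width, height, bytes_per_pixel, refresh_hz):
    return width * height * bytes_per_pixel * refresh_hz / 1e6

# e.g. 1024x768 at 24-bit color, 75 Hz refresh:
print(round(refresh_bandwidth_mb_s(1024, 768, 3, 75)))  # -> 177 MB/s
```

At these rates, the 50-100 refresh reads per second mentioned above consume a large fraction of ordinary DRAM's bandwidth on their own, which is exactly the barrier dual porting removes.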

13) How Much Do You Need? It's said that you can never have enough money, and the same seems to hold true for RAM, especially if you do a lot of graphics-intensive work or gaming. Next to the CPU itself, RAM is the most important factor in computer performance. If you don't have enough, adding RAM can make more of a difference than getting a new CPU! If your system responds slowly or accesses the hard drive constantly, you need to add more RAM. If you are running Windows 95/98, you need a bare minimum of 32 MB, and your computer will work much better with 64 MB. Windows NT/2000 needs at least 64 MB, and it will take everything you can throw at it, so you'll probably want 128 MB or more. Linux works happily on a system with only 4 MB of RAM; if you plan to add X-Windows or do much serious work, however, you'll probably want 64 MB. Apple Mac OS based systems will work with 16 MB, but you should probably have a minimum of 32 MB. The amounts of RAM listed above are estimated for normal usage: accessing the Internet, word processing, standard home/office applications and light entertainment. If you do computer-aided design (CAD), 3-D modeling/animation or heavy data processing, or if you are a serious gamer, then you will most likely need more RAM. You may also need more RAM if your computer acts as a server of some sort (Web pages, database, application, FTP or network). Another question is how much VRAM you want on your video card. Almost all cards that you can buy today have at least 8 MB of RAM, which is normally enough for a typical office environment. You should probably invest in a 32 MB graphics card if you want to do any of the following: play realistic games; capture and edit video; create 3-D graphics; work in a high-resolution, full-color environment; design full-color illustrations. When shopping for video cards, remember that your monitor and computer must be capable of supporting the card you choose.
14) CONCLUSION: In the past few years we have seen an incredible change in memory technologies, from FPMDRAM with a 60 ns access time and support for a 66 MHz memory bus to the latest RDRAM with 1.25 ns timing granularity and support for a 400 MHz memory bus. And I am quite sure that we will see better and better RANDOM ACCESS MEMORY types in the near future. RAMBUS has already hyped the market by announcing the launch of 64 GB/s RDRAM in the coming months.