
It didn't take long after the introduction of SDRAM for hardware developers and regular users to determine that even this route had its limitations. The original SDRAM operated via a single data rate (or SDR) interface that, in spite of the type's overall advances compared with DRAM, could still accept only one command per clock cycle. As computers became more popular and more complicated, and thus issued more complex requests to memory more frequently, this limit began to drag down performance.

Around 2000, a new interface method was developed. Called double data rate (or DDR), it let the memory transfer data on both the rising and falling edges of the clock signal, giving it the capability to move information nearly twice as quickly as with regular SDR SDRAM. There was another side benefit to this change as well: It meant memory could run at a lower clock rate (100-200MHz), use less energy (2.5 volts), and still achieve faster speeds (transfer rates of up to 400MTps).

As technology progressed and processors became still more powerful and demanding, DDR alone became insufficient. It was followed, in 2003, by DDR2, which refined the idea even further with an internal clock running at half the speed of the data bus; this meant it was about twice as fast as the original DDR (200-533MHz, with transfer rates up to 1,066MTps), but again used less power (1.8 volts). Naturally, DDR3 was next out of the gate (it debuted around 2007), with its internal clock cut in half again, its speed about twice that of DDR2 (400-1,066MHz, for a maximum transfer rate of 2,133MTps), and power usage reduced even more over its predecessor (to 1.5 volts).

(You may have already surmised the next logical step in memory technology. Indeed, DDR4 is already in development, and will probably begin appearing in consumer products around 2014, with wider adoption to follow gradually. It's expected to offer transfer rates of up to 4,266MTps, with voltage ranging from 1.05 to 1.2 volts.)
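The arithmetic behind those numbers is simple: effective transfer rate is the bus clock multiplied by the number of transfers per clock cycle (one for SDR, two for DDR and its successors). A minimal back-of-the-envelope sketch, using rounded representative figures from the text above rather than any particular module's datasheet:

```python
def transfer_rate_mtps(bus_clock_mhz: float, transfers_per_cycle: int) -> float:
    """Effective transfer rate in megatransfers per second (MTps)."""
    return bus_clock_mhz * transfers_per_cycle

# Illustrative top-end bus clocks from the article; real parts vary.
generations = {
    "SDR SDRAM": (133, 1),   # one transfer per clock cycle
    "DDR":       (200, 2),   # data on both rising and falling clock edges
    "DDR2":      (533, 2),   # internal clock at half the data-bus speed
    "DDR3":      (1066, 2),  # internal clock halved again
}

for name, (clock, per_cycle) in generations.items():
    rate = transfer_rate_mtps(clock, per_cycle)
    print(f"{name}: {clock}MHz bus x {per_cycle} = {rate:.0f}MTps")
```

Note that the doubling from DDR2 onward comes from faster bus clocks (enabled by the slower internal clock), not from adding more transfers per cycle; the edge-doubling trick itself appeared once, with the original DDR.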
