DDR5? But we've barely gotten to know DDR4



At CES in January 2020, SK Hynix showed off DDR5 memory running at its maximum speed. According to rumors, Micron and other manufacturers are sampling similar devices. You can't buy them through the usual channels yet, but since there are no motherboards that take them either, that is hardly a problem. As far as we know, among the first platforms able to take advantage of the new technology will be Intel's Xeon Sapphire Rapids. Which raises the question: what kind of technology is this, exactly?

SDRAM Basics


In general, a system that needs RAM has two main competing options: static and dynamic memory. There are newer technologies, such as FeRAM and MRAM, but the classic choice is between static and dynamic. Static RAM is essentially a bunch of latches, one per bit. You set a bit and it stays set until you change it or remove power, and you can read it back at any time. It can also be very fast. The problem is that each of those latches takes at least four, and often six, transistors, so only a limited number of them fit in a given area. Power consumption also tends to be high, although modern devices do a reasonable job of keeping it down.

So while static memory is popular in single-board computers and small devices, a PC or server can't practically hold gigabytes of it. Dynamic memory instead uses a tiny capacitor to store each bit. You still need a transistor to connect each capacitor to a common bus, but the cells can be packed very densely. Unfortunately, there is a big catch: the capacitors discharge rather quickly. Some mechanism has to refresh the memory periodically, or it forgets. A typical DDR4 device, for example, has to be refreshed at least every 64 ms.

Practical devices arrange the capacitors in rows and columns to maximize density and to allow an entire row to be refreshed at once. That means a device with 4,096 rows needs one row refreshed every 15.6 µs so that every row keeps its data within the 64 ms window. The refresh itself takes only a few nanoseconds.
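
A quick back-of-the-envelope check of that figure, in Python (the 64 ms window and 4,096 rows come from the text above; the round-robin refresh assumption is mine):

```python
# If every row must be refreshed within a 64 ms retention window and the
# rows are refreshed one at a time in round-robin fashion, how often does
# a refresh have to happen?
RETENTION_MS = 64
ROWS = 4096

interval_us = RETENTION_MS * 1000 / ROWS
print(f"one row refresh every {interval_us:.3f} us")  # -> about 15.625 us
```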



A typical array has buses for rows and columns. Each capacitor connects to a FET that can connect it to or disconnect it from a column bus. The FET's gate is driven by the row bus, so a row signal selects an entire row of FETs at once. The long column bus has its own capacitance and resistance, so it takes some time to precharge it and let the signal stabilize, after which a multiplexer reads the bit from the desired column. Writes happen in the reverse order. If you like, you can play around with a memory simulator in the browser.
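
If you just want the flavor of row/column addressing without the browser simulator, here is a minimal toy model in Python. The class and its simplifications (no precharge timing, no sense-amplifier details, no refresh) are mine, not something taken from a real controller:

```python
# Toy DRAM array: bits live at row/column intersections. A read first
# "activates" a whole row (copying it into a row buffer, roughly what the
# sense amplifiers do), then a column multiplexer picks out a single bit.
class ToyDram:
    def __init__(self, rows=8, cols=8):
        self.cells = [[0] * cols for _ in range(rows)]
        self.row_buffer = None   # currently open row, copied out of the array
        self.open_row = None

    def activate(self, row):
        # The row signal turns on every FET in the selected row at once.
        self.row_buffer = list(self.cells[row])
        self.open_row = row

    def read(self, col):
        return self.row_buffer[col]           # column mux selects one bit

    def write(self, col, bit):
        self.row_buffer[col] = bit
        self.cells[self.open_row][col] = bit  # write back through the same path


dram = ToyDram()
dram.activate(3)
dram.write(5, 1)
print(dram.read(5))   # -> 1
```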

That's how dynamic memory, or DRAM, works. What about SDRAM? SDRAM is dynamic memory with a synchronous interface to a memory controller. The controller can queue several commands at once, handles all of the row and column logic, and can even refresh the memory automatically. Because the controller buffers both commands and data, throughput is higher than with many other technologies.

History


The history of SDRAM began in 1992, and by the year 2000 it had pushed almost every other variety of DRAM out of the market. The JEDEC industry group standardized the SDRAM interface in 1993, so there are usually no problems mixing memory from different manufacturers.

Plain SDRAM can accept one command and transfer one data word per clock cycle. In time, JEDEC defined a standard for double data rate, or DDR. It still accepts one command per cycle, but it reads or writes two words per clock, transferring one word on the rising edge of the clock and the other on the falling edge. In practice this means that, internally, a single command fetches two words, which lets the internal clock run slower than the I/O. So if the I/O clock is 200 MHz, the internal logic can run at 100 MHz while the interface still moves two words per I/O clock.
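
Putting numbers on that, with the 200 MHz example from above and an assumed standard 64-bit module data bus:

```python
# Double data rate arithmetic: two words per I/O clock, one per clock edge.
io_clock_hz = 200e6          # 200 MHz I/O clock, as in the example above
bus_bits = 64                # assumed 64-bit DIMM data bus

transfers_per_s = 2 * io_clock_hz                 # 400 MT/s
peak_gb_s = transfers_per_s * bus_bits / 8 / 1e9  # bytes per second
print(f"{transfers_per_s / 1e6:.0f} MT/s, {peak_gb_s:.1f} GB/s peak")
# -> 400 MT/s, 3.2 GB/s peak, matching the DDR row in the table below
```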



All of this worked so well that the DDR2 standard eventually followed, reorganizing the memory so that it works with four words internally and then sends or receives all four at once. The clock frequency, of course, did not change, so latency went up. DDR3 doubled the internal fetch size again, with a corresponding increase in latency.

DDR4 took a different path. Instead of doubling the internal fetch again, it interleaves accesses across internal memory banks to increase throughput. A lower operating voltage also allows higher clock frequencies. DDR4 appeared in 2012, although it only reached critical mass around 2015.

Notice a pattern of ever-growing memory bandwidth? Well, almost. The growth in throughput has roughly tracked the growth in the number of CPU cores. So although total throughput kept rising, the throughput available to a single core on a typical machine has not changed much for quite a while. In fact, given how quickly core counts are climbing on a typical CPU, the per-core average is actually falling. So it's time for a new standard.
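
As a rough illustration of that per-core stagnation (the core counts below are made up for the example, not measurements):

```python
# Peak bandwidth divided by core count: the total grows, the per-core
# share barely moves. Core counts here are illustrative, not historical data.
configs = [
    ("SDR era,  1 core",   1.6,  1),
    ("DDR3 era, 4 cores",  8.5,  4),
    ("DDR4 era, 16 cores", 25.6, 16),
]
for label, peak_gb_s, cores in configs:
    print(f"{label}: {peak_gb_s / cores:.2f} GB/s per core")
# -> 1.60, 2.12 and 1.60 GB/s per core respectively
```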

DDR5


And now we have DDR5, defined in 2017. Judging by the reports, DDR5-3200 SDRAM should deliver about 1.36 times the throughput of DDR4-3200, and possibly more. We also hear that the prefetch size will double again, at least as an option.

Type   Throughput   Voltage   Prefetch   Year
SDR    1.6 GB/s     3.3 V     1          1993
DDR    3.2 GB/s     2.5 V     2          2000
DDR2   8.5 GB/s     1.8 V     4          2003
DDR3   8.5 GB/s     1.5 V     8          2007
DDR4   25.6 GB/s    1.2 V     8          2017
DDR5   32 GB/s      1.1 V     8/16       2019
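
The throughput column is simply the transfer rate times the module width. A small sketch with example speed grades for several generations (the grades chosen are mine; all are assumed to be standard 64-bit modules):

```python
# Peak module bandwidth = transfers per second x bus width in bytes.
def peak_gb_s(mega_transfers_per_s, bus_bytes=8):   # 64-bit DIMM assumed
    return mega_transfers_per_s * 1e6 * bus_bytes / 1e9

for name, mt_s in [("DDR-400", 400), ("DDR2-1066", 1066),
                   ("DDR4-3200", 3200), ("DDR5-4000", 4000)]:
    print(f"{name}: {peak_gb_s(mt_s):.1f} GB/s")
# -> 3.2, 8.5, 25.6 and 32.0 GB/s, matching the corresponding table rows
```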


As the table shows, throughput has grown twenty-fold compared with the original SDR memory over 26 years. Not bad. The 16-word prefetch looks especially interesting, since it lets a chip fill a typical PC cache line in one go.
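
The cache-line remark comes down to simple arithmetic, assuming a 32-bit DDR5 sub-channel and the usual 64-byte cache line (both common values, but assumptions on my part rather than something stated above):

```python
# A 16-deep prefetch (burst of 16) on a 32-bit DDR5 sub-channel moves
# exactly one typical cache line per access.
prefetch_words = 16
subchannel_bits = 32     # DDR5 splits a 64-bit DIMM into two 32-bit channels
burst_bytes = prefetch_words * subchannel_bits // 8
print(burst_bytes)       # -> 64, the usual x86 cache line size
```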

There are other benefits, too. For example, if you have ever tried to connect SDRAM to your own circuit or FPGA, you will appreciate the loopback mode. And if you really like large amounts of memory, the maximum capacity of a single DDR5 chip rises to 64 Gbit.

By the way, there is also an LPDDR5 specification for low-power memory aimed at devices such as smartphones. That specification came out last year, and so far we haven't seen a big rush to produce such parts. LPDDR4 lets you choose between two frequency options, trading speed for power consumption; LPDDR5 offers three. There are also the GDDR standards, now up to GDDR6, aimed at graphics and other high-speed applications. Eventually LPDDR5 should reach 6.4 Gbit/s per I/O pin, and GDDR6 can boast hundreds of GB/s depending on the bus width.

And now what?


Unless you are running a heavily loaded server, or something else that keeps every core of your CPU busy, you will not feel much of a difference between DDR4 and DDR5. But then again, who doesn't like good benchmark numbers?

Besides, from the point of view of a typical workstation, the main goal is to have enough RAM that you don't hit the disk too often, especially if that disk has spinning platters, notorious for their slow speed. The time it takes to read and write RAM is not that significant a factor in everyday work. With SSDs the situation is not as bad as it used to be, but the bandwidth of a typical SSD is still well below even DDR3's, although faster drives are on the horizon. So unless you are running a very heavy multi-core workload, you are better off with 32 GB of DDR3 than 4 GB of DDR5, since the extra memory saves you time on much slower operations.
