Demanding users often spend a little more when buying memory modules, choosing models with lower latencies (also referred to as memory timings) that, at least in theory, offer higher performance. But do these modules bring a real-world advantage? To answer this question, we ran tests with programs and games using two different sets of memory timings. Check it out!
If you are not familiar with the meaning of RAM timings, it is important to read our “Understanding RAM Timings” tutorial, which explains the subject in detail.
In short, memory latencies or timings represent the number of clock cycles the memory waits before delivering data. The different values (CL, tRCD, tRP, tRAS, and CR) represent the waiting times in specific situations, such as row and column changes (since data is organized in memory as a matrix) or between consecutive commands.
We decided to use DDR3 memory modules running at 1,600 MHz (DDR3-1600), since it is one of the most common configurations nowadays. To make the comparison, we first configured the memory in the motherboard setup program with 9-9-9-24-1T timings, which are typical values found on high-end DDR3-1600 modules (there are special models with even lower latencies, however), and then with 11-11-11-30-2T timings, which are values usually found on low-cost DDR3-1600 memory modules.
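To put these numbers in perspective, the difference between the two profiles can be converted into absolute time. This is a minimal sketch (the helper function and its name are our own, for illustration only): DDR3-1600 transfers data at 1,600 MT/s, but timings are counted in memory-clock cycles, which tick at half that rate (800 MHz), so each cycle lasts 1.25 ns.

```python
def timing_ns(cycles, transfer_rate_mts):
    """Convert a memory timing given in clock cycles to nanoseconds.

    transfer_rate_mts is the DDR transfer rate in MT/s; the actual
    memory clock runs at half that rate (DDR = two transfers per clock).
    """
    clock_mhz = transfer_rate_mts / 2
    return cycles * 1000 / clock_mhz  # one cycle = 1000 / clock_mhz ns

# CAS latency (CL) of the two profiles compared in this article:
print(timing_ns(9, 1600))   # high-end profile: 11.25 ns
print(timing_ns(11, 1600))  # value profile: 13.75 ns
```

In other words, each access affected by CAS latency takes about 2.5 ns longer on the value modules, which is why any real-world difference tends to be small.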
For each configuration, we ran benchmark software (PCMark, 3DMark, and Cinebench R15), converted a video (using Media Espresso), and ran three recent games that, based on our recent tests, demand strong overall machine performance rather than relying only on video card performance (Dirt Rally, Dying Light, and GTA V).
Figures 1 and 2 show the memory configuration on both tests, checked using CPU-Z.
Figure 1: using typical timings of a high-end memory
Figure 2: using typical timings of a value memory
We will list the configuration we used in our tests on the next page.