Demanding users often choose to spend a little more when buying memory modules, purchasing models with lower timings (or latencies) that, in theory, offer better performance. But can those modules make the integrated GPU achieve higher performance? Let’s check!
Recently, we tested whether memory timings affect real-world performance in a system using a dedicated video card. We also recalled that, while dual-channel memory does not bring a significant performance increase when a dedicated video card is used, when we tested the impact of a dual-channel configuration on the integrated GPU, the result was very different. Therefore, we decided to check whether memory timings affect the performance of an A10-7870K processor when using its integrated video.
If you are not familiar with the meaning of RAM timings, it is important to read our “Understanding RAM Timings” tutorial, which explains the subject in detail.
In short, memory latencies or timings represent the number of clock cycles the memory waits before delivering data. The different values (CL, tRCD, tRP, tRAS, and CR) represent the waiting times in specific situations, such as row and column changes (since data is organized in memory as a matrix) or between different commands.
We decided to use DDR3 memory modules running at 1,600 MHz, because it is one of the most common configurations nowadays. To make the comparison, we first configured the memory in the motherboard setup program with 9-9-9-24-1T timings, which are typical values found on high-end DDR3-1600 modules (there are special models with even lower latencies, however), and then with 11-11-11-30-2T timings, which are values usually found on low-cost DDR3-1600 memory modules.
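To put the two profiles in perspective, the timing values can be converted from clock cycles into nanoseconds. The short Python sketch below is our own illustration (not part of the article's methodology); it assumes the standard DDR relationship where the actual clock is half the 1,600 MT/s transfer rate:

```python
# Illustrative sketch (our assumption, not from the article): converting
# memory timings from clock cycles to nanoseconds for the two tested profiles.

DDR3_1600_MT_S = 1600           # transfer rate in MT/s
clock_mhz = DDR3_1600_MT_S / 2  # DDR: actual clock is half the transfer rate
cycle_ns = 1000 / clock_mhz     # duration of one clock cycle, in ns

def timing_ns(cycles):
    """Convert a timing value given in clock cycles to nanoseconds."""
    return cycles * cycle_ns

# CAS latency (CL) of the two profiles tested in the article
fast_cl_ns = timing_ns(9)   # 9-9-9-24-1T profile  -> 11.25 ns
slow_cl_ns = timing_ns(11)  # 11-11-11-30-2T profile -> 13.75 ns

print(f"CL 9:  {fast_cl_ns:.2f} ns")
print(f"CL 11: {slow_cl_ns:.2f} ns")
print(f"Difference: {slow_cl_ns - fast_cl_ns:.2f} ns "
      f"({(slow_cl_ns / fast_cl_ns - 1) * 100:.0f}% longer)")
```

In other words, the "slow" profile waits about 22% longer on a CAS access than the "fast" one, which is the theoretical gap the benchmarks on the following pages put to the test.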
For each configuration, we ran 3DMark, which includes several 3D performance benchmarks, and also ran some recent games, always using the integrated GPU.
Figure 1 shows the memory configuration in both tests, checked using CPU-Z.
Figure 1: Latencies used in our tests
We list the configuration we used in our tests on the next page.