[nextpage title=”Introduction”]
Demanding users often choose to spend a little more when buying memory modules, purchasing models with lower timings (or latencies) that, in theory, offer better performance. But can those modules improve the performance of an integrated GPU? Let’s check!
Recently, we tested whether memory timings affect real-world performance in a system with a dedicated video card. We also remembered that, while dual-channel memory does not bring a significant performance increase when using a dedicated video card, the result was very different when we tested the impact of a dual-channel configuration with an integrated GPU. Therefore, we decided to check whether memory timings affect the performance of an A10-7870K APU when using the integrated video.
If you are not familiar with RAM timings, we recommend reading our “Understanding RAM Timings” tutorial, which explains the subject in detail.
In short, memory latencies or timings represent the number of clock cycles the memory waits before delivering data. The different values (CL, tRCD, tRP, tRAS, and CR) represent the waiting times in specific situations, such as row and column changes (since data is organized in memory as a matrix) or between different commands.
We decided to use DDR3 memory running at 1,600 MHz, since it is one of the most common configurations nowadays. For the comparison, we first configured the memory in the motherboard setup with 9-9-9-24 1T timings, which are typical of high-end DDR3-1600 modules (there are special models with even lower latencies, however), and then with 11-11-11-30 2T timings, which are values usually found on low-cost DDR3-1600 memory modules.
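A quick back-of-the-envelope calculation (sketched here in Python) shows what those numbers mean in real time. DDR3 transfers data twice per clock, so a DDR3-1600 module runs on an 800 MHz I/O clock, and each timing cycle lasts 1.25 ns:

```python
# Effective CAS latency in nanoseconds for a given CL value and
# DDR3 data rate (in MT/s). DDR transfers twice per clock, so the
# I/O clock of DDR3-1600 is 800 MHz (cycle time = 1.25 ns).
def cas_latency_ns(cl_cycles, data_rate_mts):
    clock_mhz = data_rate_mts / 2      # DDR: two transfers per clock
    cycle_time_ns = 1000 / clock_mhz   # nanoseconds per clock cycle
    return cl_cycles * cycle_time_ns

low = cas_latency_ns(9, 1600)    # CL9 module:  11.25 ns
high = cas_latency_ns(11, 1600)  # CL11 module: 13.75 ns
print(low, high)
```

In other words, the CL11 module waits about 22% longer than the CL9 module on a column access, which is why, in theory, lower timings should help.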
For each configuration, we ran 3DMark, which includes several 3D performance benchmarks, as well as some recent games, always using the integrated GPU.
Figure 1 shows the memory configuration on both tests, checked using CPU-Z.
Figure 1: latencies used in our tests
We will list the configuration we used on our tests on the next page.
[nextpage title=”How We Tested”]
During our benchmarking sessions, we used the configuration listed below. The only difference between sessions was the memory timings.
Hardware Configuration
- CPU: A10-7870K
- Motherboard: ASRock FM2A88X Extreme6+
- CPU Cooler: AMD stock
- Memory: 16 GiB DDR3-2133, four G.Skill Ripjaws F3-17000CL9Q-16GBZH 4 GiB memory modules configured at 1,600 MHz
- Boot drive: Kingston HyperX Savage 480 GB
- Video Card: integrated Radeon R7
- Video Monitor: Philips 236VL
- Power Supply: Corsair CX500M
Operating System Configuration
- Windows 10 Home 64-bit
- NTFS
- Video resolution: 1920 x 1080 60 Hz
Driver Versions
- AMD driver version: 15.12
Software Used
- 3DMark
- FRAPS
- CPU-Z
- Battlefield 4
- Dirt Rally
- Dying Light
- Grand Theft Auto V
- Mad Max
Error Margin
We adopted a 3% error margin. Thus, differences below 3% are not considered relevant; in other words, products with a performance difference below 3% should be treated as having similar performance.
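The rule above can be sketched as a simple check (the scores below are hypothetical, for illustration only):

```python
ERROR_MARGIN_PCT = 3.0  # differences below this are not relevant

# Percentage difference between two benchmark scores, relative to
# the lower-scoring configuration.
def percent_difference(score_a, score_b):
    return (score_a - score_b) / score_b * 100

def is_relevant(score_a, score_b):
    return abs(percent_difference(score_a, score_b)) >= ERROR_MARGIN_PCT

# Hypothetical scores: 4,160 vs. 4,000 points is a 4% difference,
# above the 3% margin, so it counts as a real performance gain.
print(is_relevant(4160, 4000))
```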
[nextpage title=”3DMark”]
3DMark is a program with a set of several 3D benchmarks. Sky Diver measures DirectX 11 performance and is aimed at mainstream computers. Cloud Gate measures DirectX 10 performance. Ice Storm measures DirectX 9 performance and is targeted at entry-level computers.
We ran each benchmark three times, and each score is the average of the three collected results.
On Sky Diver, the performance with lower latencies was 6% higher than with higher timings.
On Cloud Gate, the performance gain with the lower timings was 4%.
On Ice Storm Extreme, the performance gain was also 4%.
[nextpage title=”Gaming Performance”]
Battlefield 4
Battlefield 4 is one of the most popular games of the Battlefield franchise, released in 2013. It is based on the Frostbite 3 engine, which uses DirectX 11. In order to measure performance in this game, we walked through the first mission, measuring the number of frames per second (fps) three times using FRAPS. We ran this game at Full HD, with overall image quality set to “medium.”
The results below are expressed in fps and are the average of the three collected results.
On Battlefield 4, there was a performance improvement of about 4% due to the lower latencies.
Dirt Rally
Dirt Rally is an off-road racing game released in April 2015, using the Ego engine. To measure performance in this game, we ran its built-in performance test at Full HD resolution, with image quality set to “medium” and MSAA off.
The results below are expressed in frames per second (fps).
In this game, the performance was 10% higher with lower latencies.
Dying Light
Dying Light is an open-world horror game launched in January 2015, using the Chrome Engine 6. We tested performance in this game with quality options at the minimum and 1280 x 720 resolution, measuring the frame rate three times using FRAPS.
The results below are expressed in fps and are the average of the three collected results.
On Dying Light, the performance with lower latencies was about 6% higher than with higher latencies.
Grand Theft Auto V
Grand Theft Auto V (GTA V) is an open-world action game released for PCs in April 2015, using the RAGE engine. In order to measure performance in this game, we ran its built-in performance test (the part where the camera follows the plane), measuring the frame rate with FRAPS. We ran GTA V at 1280 x 720, with image quality set to the minimum.
The results below are expressed in frames per second.
On GTA V, there was also a performance improvement, of about 5%.
Mad Max
Mad Max is an open-world action game launched in September 2015, using the Avalanche engine. In order to measure performance in this game, we ran its intro, measuring the frame rate with FRAPS three times. We ran the game at Full HD, with image quality set to “normal.”
The results below are expressed in fps and are the average of the three collected results.
On Mad Max, the results were the same on both tests.
[nextpage title=”Conclusions”]
When we tested whether memory timings affected computer performance using a high-end video card, we concluded that there was no practical performance difference between high-latency and low-latency memories.
Likewise, when we tested the impact of a dual-channel memory configuration on gaming performance with a high-end video card, we noticed no improvement; but with integrated video, the result was very different, and we saw a fair performance improvement.
Because of this, we decided to test whether memory timings could impact integrated GPU performance, and the result was clear: yes, there is a performance improvement. However, it is small: about 5% in most cases.
So, if you are building a computer with integrated video (specifically with an AMD APU, which was the processor we used), and you have the option to buy lower-latency memory modules for a small difference in price, go ahead. However, if the price difference between “normal” and low-latency modules is significant, forget about it: there are other parts where the investment brings a bigger performance improvement.