Why does Dell put slower memory in their systems?

Status: Not open for further replies.
Good link (xbit), but there are a few variables that could swing the verdict the other way: the applications used, the CPU architecture, and overclocking.

As the WinRAR case shows, there is a 15-16% gain in moving from PC2-6400 (DDR2-800) to PC2-8500 (DDR2-1066) when comparing the same timings -

[winrar.png: WinRAR benchmark chart]


That is not 'insignificant', especially on machines that work hard, like developer workstations and small servers, and this is just one step down. The difference across two or more steps is dramatic. Sure, WinRAR is the extreme case among all the benchmarks, but I wonder what other applications, ones they may not have considered, would also have benefited greatly from higher-bandwidth memory.

Also, did you see their comment about how the Intel Core 2 Duo architecture is handicapped for use with memory above PC2-4200 (DDR2-533)? From page 4 -

"As you can see, memory types with different theoretical bandwidths do not differ much in practice. For example, there is a 100% difference in theoretical bandwidth between DDR2-533 and DDR2-1067 whereas the difference between the practical results obtained with those memory types is 17% at maximum.

This poor performance of fast DDR2 SDRAM is due to the architecture of Core 2 Duo systems in which memory is connected to the CPU via the chipset and two sequential buses. In this design it is not the bandwidth of dual-channel high-frequency memory that becomes the bottleneck, but the Quad Pumped Bus that connects the CPU with the chipset's North Bridge. Its maximum theoretical bandwidth is 8.5GB/s in Core 2 Duo systems, which only equals the bandwidth of dual-channel DDR2-533 SDRAM. That's why we don't see a really big performance growth if we use memory faster than DDR2-533.

It seems it doesn't make any sense to use memory faster than DDR2-533 on the Core 2 Duo platform. This is not quite so. Memory access latency decreases along with frequency, which can be seen in practical tests.

Here, the results differ much more. Like the bandwidth, the latency should have a considerable effect on system performance in many applications and may justify the use of high-frequency memory in a computer with a Core 2 Duo processor.

"

I don't know for sure how their AMD counterparts perform in this specific regard.
 
Comparing apples to apples would be a machine that Dell ships with 667 instead of 800, or 800 instead of 1066 memory, at the same latency (the slower one's).

1562 vs 1429 is not a lot, only 9.3%, and this is one of the higher-difference applications, one you usually aren't running all day. Compression and decompression work on new data all the time in a stream, so the cache doesn't help much, and they are usually limited far more by the I/O system (the disk drive) than by memory. You won't see a 9.3% improvement in real life, and a 9.3% difference on a 20-second task that you run once a day will not make much of a difference in the long run.
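For what it's worth, the 9.3% figure falls straight out of the two WinRAR scores quoted above (relative speedup = faster / slower - 1):

```python
# Relative speedup between the two WinRAR results (KB/s) from the xbit chart.
faster, slower = 1562, 1429
speedup = (faster - slower) / slower * 100
print(f"{speedup:.1f}%")  # prints 9.3%
```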

Like I said before, the performance difference between a low-end and a high-end CPU is 100% or more; what is a little 9.3% when the same money can buy even higher performance through a slightly faster CPU (from a Celeron to a Pentium E, or from a Pentium E to a Core 2 Duo, etc.)? Benchmarks have generally shown that people can't notice anything less than 5%, and notice only a minor difference below 10%. That means a 5% variation is NOISE.

Developer workstations and engineering workstations have different work profiles. We need a lot of memory or a lot of processing power, depending on the workload, but we rarely need a high volume of data running through the compiler at the same time. Most of what a compiler does is load a bunch of files, then parse, tokenize, swap in prebuilt code, and finally link everything by patching in address pointers. None of these requires a huge amount of memory traffic that would spill outside the CPU cache (e.g. the 2MB-6MB L2 on these chips).

Compiler and engineering workstations are no longer the most intensive tasks for computers, and small servers are usually limited by I/O (hard drives); people often use old, low-performance computers to run file servers. If you need high performance you will most likely be looking at a different class of machine to begin with, not buying those low/mid-range Dells and whining about them giving you DDR2-533 instead of DDR2-800.

I'm not saying there is no difference in performance, but the difference is greatly exaggerated. Look at the rest of the benchmarks on xbit: the most intensive tasks, like Cinebench and MPEG encoding, run at ALMOST THE SAME PERFORMANCE. The games also run at ALMOST THE SAME PERFORMANCE; there the bottleneck is the video card and the CPU. High-speed memory usually matters only if you overclock and your CPU is multiplier-locked: then you have no choice but to use higher-speed RAM so the CPU speed can go even higher.
 