Memory Scaling, AMD's Trinity-Based APUs, And Game Performance


merikafyeah

Honorable
Jun 20, 2012



"I don't have a dog in this fight, but 2400MHz DDR3 effectively maxes-out the bandwidth available at 128-bit."

Uhh, source? What exactly are you referring to? The APU bus-size?

"SWING! and a miss."

In your mind I'm sure. But here in the real world, the numbers back me up.

"Strike THREE!"

I'm guessing you just eyeballed the length of the pretty bars with all of them pretty colors, instead of y'know, actually reading the numbers. Because you just proved my point?

1. 3DMark 11 Performance: 2133-2400 = 43 point difference. For future reference, this isn't a significant difference.

2. Aliens vs. Predator, 1920x1080 MQ: 2133-2400 = 1.1 point difference. FYI, that's FPS (frames per second). If you DON'T think that 1.1 FPS difference is only a tiny fraction better, I'd like to have what you're smoking.

3. F1 2012, 1920x1080 HQ: 2133-2400 = 1.1 point difference. OMG is this a pattern?!

4. Far Cry 2, 1920x1080 UHQ: 2133-2400 = 1.05 point difference. UH-OH! Looks like it's getting smaller! This is not looking good for your case.

5. Borderlands 2, 1920x1080 LQ: 2133-2400 = 1.9 point difference. WOW! Almost 2 WHOLE FPS faster!!!

Golly-gee-willackers! And that's by increasing the RAM frequency by ONLY 267 MHz!!!
Imagine if we used DDR3-2933! Why, we could get a whopping 5 FPS more!!! What an improvement that would be!!
 
My only beef with the article was the choice of motherboard/memory. Spending $140 USD on an upper-end Asus board is a bad idea if you're building a low-end "budget" system.

A better choice would be the FM2A85X Extreme6, which Tom's has already reviewed.

http://www.newegg.com/Product/Product.aspx?Item=N82E16813157339

It does everything that Asus board will do, for $108 USD.

The memory was a good choice though.

Please remember that for anything related to an APU, memory speed is an absolute must. DDR3-2133 is the lowest I'd go nowadays; DDR3-1600 just can't offer enough bandwidth, and the APU really needs it. 34GB/s is comparable to, if not faster than, most budget cards out there. Rightfully, it gets beaten by anything with dedicated GDDR5 memory.
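If anyone wants to sanity-check that 34GB/s figure, it's just the data rate times the bus width times the number of channels. A rough Python sketch (theoretical spec-sheet peaks only, not measured throughput):

[code]
def peak_bandwidth_gbs(data_rate_mts, bus_width_bits, channels=1):
    # Peak bandwidth = data rate (MT/s) x bytes per transfer x channels, in GB/s.
    return data_rate_mts * (bus_width_bits / 8) * channels / 1000.0

print(peak_bandwidth_gbs(2133, 64, channels=2))  # dual-channel DDR3-2133 -> ~34.1 GB/s
print(peak_bandwidth_gbs(1600, 64, channels=2))  # dual-channel DDR3-1600 -> ~25.6 GB/s
[/code]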

DDR4 will most certainly bring faster performance for APUs. The biggest difference isn't the clock rate but the channel bandwidth opening up. Rather than two 64-bit channels going to two DIMMs, we will have several micro-channels going directly to the onboard chips. This enables widths larger than 128-bit, and it also allows different commands to be sent to each chip for read, write, and refresh. Currently each channel can handle one command at a time; this will be reduced to one command per chip. These changes are absolutely critical for applications that issue large numbers of parallel read/write operations, i.e. APUs.
 

A Bad Day

Distinguished
Nov 25, 2011
[citation][nom]merikafyeah[/nom]1. 3DMark 11 Performance: 2133-2400 = 43 point difference. For future reference, this isn't a significant difference.2. Aliens vs. Predator, 1920x1080 MQ: 2133-2400 = 1.1 point difference. FYI, that's FPS (frames per second). If you DON'T think that 1.1 FPS difference is only a tiny fraction better, I'd like to have what you're smoking.3. F1 2012, 1920x1080 HQ: 2133-2400 = 1.1 point difference. OMG is this a pattern?!4. Far Cry 2, 1920x1080 UHQ: 2133-2400 = 1.05 point difference. UH-OH! Looks like it's getting smaller! This is not looking good for your case.5. Borderlands 2, 1920x1080 LQ: 2133-2400 = 1.9 point difference. WOW! Almost 2 WHOLE FPS faster!!!Golly-gee-willackers! And that's by increasing the RAM frequency by ONLY 267 Mhz!!! Imagine if we used DDR3-2933! Why we could get a whopping 5 FPS more!!! What an improvement that would be!![/citation]

What about AA or anisotropic filtering? Care to explain why my laptop's GPU performance increased enough for one more step of AA or AF after bumping up the memory clock rate?
 

Crashman

Polypheme
Former Staff
[citation][nom]palladin9479[/nom]My only beef with the article was the choice of MB / Memory. Spending $140 USD on an upper end Asus board is a bad idea if your building a low end "budget" system.Better choice would be the FM2A85X Extreme6 which Toms has already reviewed.http://www.newegg.com/Product/Prod [...] 6813157339Does everything that Asus board will do for $108 USD.[/citation]The motherboard was chosen to accurately represent performance scaling of a range of modules while using only one set, based on Asus' automatic secondary and tertiary timing optimizations. Price wasn't even a consideration because the article isn't about motherboards.

The article is about memory and APU performance scaling. The rest of the system was chosen to best highlight those components. You might as well criticize the use of a $160 power supply and a $180 SSD.
 


Ohh, I understand the reasoning behind it; it just makes the package seem more expensive than it normally would be. The board I referenced will do 1600~2400 easily, including working with XMP. I figured you guys already had the Asus one lying around, so it was more cost-effective that way.
 


Yes, at stock, DDR3-2400 wasn't much better than DDR3-2133 with slightly tighter timings. However, DDR4-2400 with tight timings is undoubtedly going to make a somewhat bigger difference; it'd probably be around 5% to 10% instead of about 2%. It's still not much to write home about, but hey, it'd be something.

I know it's been said before, but just consider overclocking, even on Trinity and Llano right now. You can push their GPU frequency up by 50-60% with ease. I guarantee that the point of diminishing returns will be pushed back quite a lot once overclocking is considered. The difference in memory bandwidth, even at stock, also helps with MSAA and similarly memory-bandwidth-intensive features. If you run tests, even with the GPU at stock, using higher levels of such features, the performance differences between the memory bandwidth and latency steps will grow, and that's especially true with GPU overclocking considered.
 

Crashman

Polypheme
Former Staff
Even though you're right, misleading people is wrong. It's wrong for the same reason that I don't find these types of tests credible at other sites. First, answer this question: Which is more fair:

1.) To test DDR3-1600, 1866, 2133, and 2400, all at CAS 9
2.) To test DDR3-1600 CAS 6, DDR3-1866 CAS 7, DDR3-2133 CAS 8, and DDR3-2400 CAS 9

Fairness favors the SECOND test, because ALL the memory in method 2 has the SAME LATENCY in real time.

I know that you know latency is measured in cycles. And cycle time shortens as you increase frequency. In the above Test 2 example, all of the memory takes 3.75ns to respond.
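To make that concrete, here's the arithmetic as a quick Python sketch, dividing the CAS count by the data rate the same way the numbers above do (a convention rather than the strict cycle-based definition, but it doesn't change the comparison between methods):

[code]
def cas_latency_ns(data_rate_mts, cas_cycles):
    # Effective CAS latency in nanoseconds: cycles / data rate (MT/s) * 1000.
    return cas_cycles / data_rate_mts * 1000.0

# Method 1: everything at CAS 9 -- the real-time latency shrinks as frequency rises.
for rate in (1600, 1866, 2133, 2400):
    print(f"DDR3-{rate} CAS 9: {cas_latency_ns(rate, 9):.2f} ns")

# Method 2: CAS scaled with frequency -- the real-time latency stays fixed at 3.75 ns.
for rate, cas in ((1600, 6), (1866, 7), (2133, 8), (2400, 9)):
    print(f"DDR3-{rate} CAS {cas}: {cas_latency_ns(rate, cas):.2f} ns")
[/code]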

I've never seen another site consider this fact in their testing. But it's not just a fair way to test the effect of bandwidth, it's also a realistic way. Add one cycle to the theoretical Test 2 above and you get the actual, available, economical memory configurations (1600 C7, 1866 C8, etc.) used in THIS article.

Latency and bandwidth are of course two different things, so a PERFECT review would test both. Unfortunately, a "perfect" test would need to include some wildly-expensive low-latency DDR3-2400 such as you suggested. THIS article instead attempts to cover SOME of that issue by including both DDR3-1600 C7 and DDR3-1600 C9.

Thus, even if you're correct, the article is far more "right" because it represents both fairness and real-world applicability.
 

Guest

Guest
ATTENTION TO TOMSHARDWARE STAFF

I've seen numerous reviews from your site and many others on the effects of DDR3 RAM speed/timings on the AMD Trinity architecture. However, one thing I haven't seen reviewed is the effect of the amount of system memory dedicated to the APU for graphics. I have an MSI FM2 Mini-ITX box that I bought at the end of last year, equipped with an A10-5700 and 16GB of 1866 DDR3. By default, the board only dedicates 512MB to the video. After tinkering with the settings (I had to enable the "discrete card" settings to adjust the dedicated video RAM, even though I'm not using a discrete card), I was able to assign 1GB of memory to the graphics portion of the APU. I noticed a difference in performance (although I don't have numbers to quantify it). I would really like to see these reviews extended with settings that test 512MB, 1GB, 1.5GB, and 2GB dedicated to the APU.
 


In no way was I criticizing the article, or at least I didn't intend to ;)

I was simply trying to add a little context to the discussion of how DDR4 would impact the situation. I agree with what you said, and the way this article measured and tested the bandwidth differences was proper and much more relevant and accurate than any similar tests I've seen elsewhere.

My point was just that DDR4-2400 with tighter timings would have considerably lower latency than DDR3-2400, and that would change things a little for a stock IGP comparable to the A10-5800K's Radeon HD 7660D compared to what DDR3-2400 with current real-world timings delivers. Oh, and DDR4 is especially important for improving the overclocking scaling of the APUs' IGPs.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]Yes, at stock, DDR3-2400 wasn't much better than DDR3-2133 with slightly tighter timings. However, DDR4-2400 with tight timings is undoubtedly going to make a somewhat higher difference;[/citation]
What "tight" timings are you expecting? The hallmark of new DDR interface generations is HIGHER latencies, at least in terms of cycle count, which keep the overall access time roughly constant... DDR1 was 2-3 cycles, DDR2 was 4-5 cycles, most mainstream DDR3 is 9-10 cycles, so mainstream 3GT/s DDR4 is likely going to be ~20 cycles.
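A rough way to see where a ~20-cycle figure could come from: hold the effective access time roughly constant and back out the cycle count at each data rate. A quick Python sketch, where the ~6ns target is purely an illustrative assumption (and the cycle counts use the same data-rate convention as the rest of this thread):

[code]
import math

TARGET_NS = 6.0  # assumed effective access time; illustrative only

def expected_cas_cycles(data_rate_mts, target_ns=TARGET_NS):
    # If the access time stays ~constant, the CAS count scales with the data rate.
    return math.ceil(target_ns * data_rate_mts / 1000.0)

for rate in (800, 1600, 2133, 3200):
    print(f"{rate} MT/s -> ~CAS {expected_cas_cycles(rate)}")
# roughly: 800 -> 5, 1600 -> 10, 2133 -> 13, 3200 -> 20
[/code]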

The main reason for that is the slow CMOS process (a side effect of having to minimize leakage while maximizing capacitance) used in DRAM chips, which requires more cycles to pipeline commands and data at high clock rates.

Another problem with pushing DDR4 beyond 3GT/s: how many 64-bit-wide buses have ever gone beyond 2GT/s? Most interfaces have switched over to serial before getting there, because keeping all the parallel data lines' bits correctly lined up becomes very difficult, particularly when signals have to pass through two mechanical connectors between source and destination. Because of that, high-speed DDR4 might only allow one DIMM per channel.

The easiest way to feed IGPs would be to go with eDRAM... put 1GB of the stuff on-package to store the front/back/Z/etc. buffers; now you have 200GB/sec on tap, and whatever isn't used by the IGP can cache frequently accessed OS structures like contexts.
 
I don't quite understand why DDR3-1866 wasn't included in the last graph, platform performance per dollar. It sometimes shows up priced very close to DDR3-1600, so it could be a great deal for APU builds in some situations. Other times, DDR3-2133 may be better.
 

alextheblue

Distinguished
[citation][nom]Crashman[/nom]It's in there, check the DDR3-1600 C9 vs DDR3-1600 C7 numbers. Basically, switching from C9 to C7 gives you a performance benefit roughly similar to one speed upgrade.[/citation]Err, what? The charts I see show a very tiny speed benefit. Certainly not as much as a step up to higher clocked memory.

Anyway, I'd like to see these tests run again but with GPU clocks pushed way up. I'd like to see if the performance gap grows, and whether or not DDR3-2400 enables even more headroom.
 

Crashman

Polypheme
Former Staff
[citation][nom]Sakkura[/nom]I don't quite understand why DDR3-1866 wasn't included in the last graph, platform performance per dollar. It sometimes shows up priced very close to DDR3-1600, so it could be a great deal for APU builds in some situations. Other times, DDR3-2133 may be better.[/citation]Availability: It was more-expensive than DDR3-2133 on the day the prices were calculated.
 
[citation][nom]InvalidError[/nom]What "tight" timings are you expecting? The hallmark of new DDR interface generations is HIGHER latencies at least in terms of cycle count that keep the overall access time roughly constant... DDR1 was 2-3 cycles, DDR2 was 4-5 cycles, most mainstream DDR3 is 9-10 cycles so mainstream 3GT/s DDR4 is likely going to be ~20 cycles.The main reason for that is the slow CMOS process (a side effect from having to minimize leakage while maximizing capacitance) used in DRAM chips that requires more cycles to pipeline commands and data at high clock rates.Another problem with pushing DDR4 beyond 3GT/s: how many 64bits-wide busses have ever gone beyond 2GT/s? Most interfaces have switched over to serial before getting there because keeping all the parallel data lines' bits correctly lined up becomes very difficult, particularly when signals have to pass through two mechanical connectors between source and destination. Because of that, high speed DDR4 might only allow one DIMM per channel.The easiest way to feed IGPs would be to go with eDRAM... put 1GB of the stuff to store the front/back/Z/etc. buffers on-package, now you have the same 200GB/sec on-tap and whatever isn't used by the IGP can be used to cache frequently accessed OS structures like contexts.[/citation]

That's not necessarily correct. At first, timings can be higher, but they get lower soon after on many modules. You're comparing different versions of DDR at different frequencies as they were when each first came out. Today's DDR3 modules at frequencies similar to some DDR2 modules can have tighter timings, whereas the early DDR3 modules were a little looser.

GDDR5 interfaces are far over the 3GT/s mark and often as high as or higher than 7GT/s with some overclocking. Even 8GT/s and a little above has been achieved as a stable overclock on some cards. They manage to do it on very wide 256- and 384-bit buses, too.
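For a sense of scale, the usual peak-bandwidth formula puts those numbers in perspective (purely theoretical peaks, and the specific speeds below are just illustrative examples rather than any particular card):

[code]
def peak_bandwidth_gbs(data_rate_gts, bus_width_bits):
    # Peak bandwidth = data rate (GT/s) x bus width in bytes, in GB/s (theoretical).
    return data_rate_gts * bus_width_bits / 8

print(peak_bandwidth_gbs(6.0, 256))    # a 256-bit, 6 GT/s GDDR5 bus -> 192 GB/s
print(peak_bandwidth_gbs(7.0, 384))    # a 384-bit, 7 GT/s bus -> 336 GB/s
print(peak_bandwidth_gbs(2.133, 128))  # dual-channel DDR3-2133 -> ~34 GB/s, for comparison
[/code]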

DDR4 is point-to-point; it doesn't put multiple DIMMs on each channel. That's a huge part of why it's so much better than DDR3 for IGPs: one DIMM per *channel*, if we can even call them channels at that point. Four DDR4 DIMMs could deliver a huge improvement in bandwidth over four DDR3 DIMMs even at a similar or somewhat lower frequency, and they can bring a decent improvement in latency due to some of the differences between DDR4 and earlier DDR system memory technologies.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]That's not necessarily correct. At first, timings can be higher, but they get lower soon after on many modules. You'r comparing different versions of DDR at different frequencies merely when they came out.[/citation]
I'm comparing mainstream memory with mainstream memory. DDR1-400 was 2-3 cycles, DDR2-800 was 3-5 cycles, DDR3-1600 is 9-10 cycles. Modern GPUs have been engineered around GDDR5, which has 15-20 cycles of latency, so they are designed to work with high latency and will not benefit much from shaving a cycle here or there as long as the memory subsystem can deliver bandwidth. DRAM chips are pipelined and burstable; commands can be issued while data transfers are in progress, so higher latency does not have a significant impact on usable bandwidth when transfers are optimized to account for it, which is exactly what GPUs have been designed to excel at.

[citation][nom]blazorthon[/nom]GDDR5 interfaces are far over the 3GT/s mark and often as high as or higher than 7GT/s with some overclocking. Even 8GT/s is proven to be possible. They manage to do it in very wide 256 and 384 bit buses too.[/citation]
The differences between a PC and a GPU:
1- the GPU is soldered to the motherboard, CPUs aren't - at least not until mainstream Broadwell
2- the GDDR5 chips are soldered to the motherboard, PC DDR4 is soldered on a DIMM which inserts into a mechanical socket which is soldered to the motherboard
3- GDDR5 chips are about an inch away from the GPU's BGA package, DIMMs are 2-3 inches away from the CPU socket and the DIMM socket+PCB add another inch
4- address and control lines on GPUs have a fan-out to 4-8 GDDR5 chips per channel, DIMM address and control lines fan-out to 16 chips

So managing to do 64 bits through two mechanical interfaces over much longer distances is going to be considerably more difficult than doing it on GPUs. If cranking clock rates on parallel buses were as easy as it may sound, companies would have stuck with that simplicity instead of taking on the extra power and complexity of high-speed TMDS links.

The reality is that sockets and slots, particularly those of a very cost-sensitive nature such as mainstream PCs, are a signal integrity nightmare.
 
[citation][nom]InvalidError[/nom]I'm comparing mainstream memory with mainstream memory. DDR1-400 was 2-3 cycles, DDR2-800 was 3-5 cycles, DDR3-1600 is 9-10 cycles. Modern GPUs are have been engineered around GDDR5 which has 15-20 cycles latency so they have been designed to work with high latency and will not benefit much from shaving a cycle here or there as long as the memory subsystem can deliver bandwidth. DRAM chips are pipelined and burstable, commands can be issued while data transfers are in progress so higher latency does not have a significant impact on usable bandwidth when transfers are optimized to account for that, which is exactly what GPUs have been designed to excel at.The differences between a PC and GPU:1- the GPU is soldered to the motherboard, CPUs aren't - at least not until mainstream Broadwell2- the GDDR5 chips are soldered to the motherboard, PC DDR4 is soldered on a DIMM which inserts into a mechanical socket which is soldered to the motherboard3- GDDR5 chips are about an inch away from the GPU's BGA package, DIMMs are 2-3 inches away from the CPU socket and the DIMM socket+PCB add another inch4- address and control lines on GPUs have a fan-out to 4-8 GDDR5 chips per channel, DIMM address and control lines fan-out to 16 chipsSo managing to do 64bits through two mechanical interfaces over much longer distances is going to be considerably more difficult than doing it on GPUs. If cranking clock rates on parallel busses was as easy as it may sound, companies would prefer sticking with simplicity over the extra power and complexity of implementing high speed TMDS links.The reality is that sockets and slots, particularly those of a very cost-sensitive nature such as mainstream PCs, are a signal integrity nightmare.[/citation]

What was mainstream memory is irrelevant because the DDR3 memory we are talking about, DDR3-2400, is not mainstream; furthermore, what counts as mainstream memory has changed over time even within each generation of DDR. For example, DDR3-800 to DDR3-1066, and later 1333, were mainstream when DDR3 first came out, but now even DDR3-1600 and DDR3-1866 are mainstream, and DDR3-2133 may become mainstream before DDR4 is common.

That GPUs are different doesn't change much of anything. We've had GDDR5 for a long time and will probably have a GDDR6 or GDDR7 out in a year or two that far exceeds GDDR5. If, in the several years we've had GDDR5, we haven't improved enough to make system memory with even half those transfer rates using more advanced technology, then we have failed, especially since we've already got it worked out anyway. Besides, we have already been told that DDR4 makes improvements in latency as well as bandwidth, and anyone who looks at memory technologies over time notices that your timing numbers for each generation of DDR are not necessarily accurate (I'll get back to that later in this post). We even have publicly available information on how DDR4 is supposed to achieve this.

That the GPU is soldered, the memory is soldered, and the chip counts differ is all insignificant to my point. GPUs prove that it can be done, and in the past we've had FB-DIMMs with huge transfer rates despite being DIMMs no closer to a socketed (not soldered) CPU than today's memory is. In fact, they were often much farther away.

DDR4's increased capacity per chip would enable 2-4GB modules with four or eight chips with ease. Heck, we can do that with current DDR3 chips. Chip count is absolutely not a problem.

As examples of what I said earlier about timings, let's look at some common CL-tRCD-tRP timings at official frequencies that overlap between DDR2 and DDR3.

800MHz-
DDR2= 4, 5, 6 (usually 5, but 4 and 6 are both common)
DDR3= 5, 6 (I can't find a lot of examples still around)

No serious loss there for DDR3. Sure, it's something of a loss, but not much of one and these were all old modules rather than new ones.

1066MHz-
DDR2= 5, 6, 7 (usually 5)
DDR3= 6, 7, 8 (usually 7)

Again, no serious loss for DDR3. Furthermore, many newer high-frequency DDR3 modules with decent timings can be underclocked and tightened to much lower timings than even DDR2 modules achieved. So although DDR2 had a slight advantage at first, nowadays that advantage belongs to DDR3. DDR4 makes far more changes and improvements to the technology than any previous DDR generation did over its predecessor and is expected to have such an advantage from the very start. Even if it doesn't, it is even more likely to have it by the time it gets into AMD's APU systems; and even if it doesn't by then, it's only a matter of time before it does.

Yes, the added complexity of each DDR generation did mean higher timings initially, but technology improvements after the early models have always allowed the newer generation to overtake the older one, even if not until higher-frequency modules are out and need to be underclocked to match the previous generation's frequency for an apples-to-apples comparison. It doesn't even necessarily take expensive modules, either. For example, Samsung has a DDR3-1600 kit that overclocks ridiculously well and underclocks similarly well. Some low-voltage kits are also excellent about this.

Back to the chip density argument: DDR3 chips are made at up to 1GB per chip. We might even have some 2GB chips going around in the server market. 512MB chips are common in 8GB DDR3 memory modules, and many video cards use 512MB chips as well. DDR4's increased density will probably make 1GB chips common by the time DDR4 trickles down to consumers, let alone into APU systems, and even if not, 512MB chips are plenty for making, say, four quad-chip modules for a quad-DIMM 8GB kit with great frequencies and timings.

You can say that companies would prefer to stick with simplicity, but it's already started. DDR4 might not get full sixteen-chip modules running at its top speeds, but it's very unlikely that it won't get modules at those top speeds at all, and regardless, it doesn't need to in order to prove my first point about DDR4-2400. That's an easy target.

If worst comes to worst, memory modules could take a more PCIe-style approach to achieving high speeds, with multiple serial lanes instead of one wide parallel bus. Rambus's XDR2 memory is another good example of pushing the limits of SDRAM-based modules even without that, granted it doesn't see much use given how Rambus is almost universally hated and/or not trusted as a business partner.

EDIT: Sorry this reply took so long; I wanted to be more thorough in backing up my claims.
 

InvalidError

Titan
Moderator
[citation][nom]blazorthon[/nom]That the GPU is soldered, memory is soldered, and differences in chip count are all insignificant to my point. They prove that it can be done and in the past, we've had FB-DIMMS with huge transfer rates[/citation]
You do realize that FB-DIMMs are actually SERIAL, using TMDS signaling, right? This is exactly what I said... serial interfaces have a much easier time getting through sockets and slots at very high speeds because every bit can be locked onto individually, with no need to worry about clock skew between bits the way wide, high-speed parallel interfaces do.

DDR uses strobe signals to break the 64-bit DIMM bus into separately "clocked" 8-bit groups, but even this starts getting touchy beyond 2GT/s: if you have 100ps of setup and hold time on the bus, you have less than 300ps of wiggle room left for everything else.
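To put that wiggle room in perspective, a quick unit-interval calculation (the 100ps setup/hold figures are just the illustrative numbers from above, not from any particular datasheet):

[code]
def timing_margin_ps(transfer_rate_gts, setup_ps=100, hold_ps=100):
    unit_interval_ps = 1000.0 / transfer_rate_gts   # one bit time in picoseconds
    return unit_interval_ps - setup_ps - hold_ps    # budget left for skew, jitter, etc.

print(timing_margin_ps(2.0))  # 500 ps bit time -> 300 ps of margin left
print(timing_margin_ps(3.2))  # ~313 ps bit time -> only ~113 ps left
[/code]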

[citation][nom]blazorthon[/nom]Again, no serious loss for DDR3. Furthermore, many newer high-frequency DDR3 modules with decent timings can be underclocked and timing-tightened much tighter than even DDR2 modules achieved.[/citation]
But the cycle count multiplied by the cycle period yields roughly the same effective latency in nanoseconds, and gimping your clock rates achieves nothing more than sacrificing bandwidth. In benchmarks, higher bandwidth with lower effective latency practically always wins, even if the latency is numerically higher in cycles.

Most DDR2-800 (and even DDR2-1066) RAM I can see on Newegg is 5-5-5, not 6 or 7, while DDR3-1066 is overwhelmingly 7-7-7.

Let's crunch some numbers...
- DDR2-800-5 = 5 / 800 = 6.25ns effective latency
- DDR3-1066-7 = 7 / 1066 ≈ 6.6ns
- DDR3-1600-9 = 9 / 1600 = 5.625ns
- DDR4-2133-14 = 14 / 2133 ≈ 6.6ns (based on photos of pre-launch packaging)

The ~50% cycle-count latency bump at the transition boundary between technologies usually favors the older stuff in benchmarks. DDR3-1600 may have a higher latency cycle count than DDR2 or lower-speed DDR3, but it still has lower effective latency, even without paying for premium low-latency bins at the higher frequency.

If the main objective is feeding an IGP, it makes no sense to sacrifice bandwidth for lower latency, and in most CPU benchmarks it makes little to no sense either once the clock rises high enough to offset the latency cycle bumps. For mainstream RAM, whatever the current definition of mainstream may be, we have been around the 6ns mark for most of the past 10 years.

As for "chip count not being a problem", interface width and JEDEC are.

Chips with wider data buses draw more current from their IOB power plane and need their internal architecture to be that much wider, and likely slower. This means wider chips run significantly hotter (2/4/8X as much stuff happening inside; they may actually require a heat spreader), and it also becomes more difficult to adequately bypass/filter the power supply to support higher speeds.

The other thing is that JEDEC defined the DIMM interface as having one strobe signal per 8-bit data group, so having one 8-bit chip per strobe signal (or a pair for double-sided DIMMs) is practically dictated by the standard itself - try finding a 1GB DDR3 DIMM that isn't an 8x128MB configuration, even though today's 1GB ICs would make a single-chip DIMM theoretically possible.
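Put differently, the arithmetic the standard forces on you looks like this (a tiny sketch; the 1GB DIMM is just the example from above):

[code]
# A standard DIMM data bus is 64 bits wide, with one strobe per 8-bit group,
# so you effectively get one x8 chip (or a front/back pair) per group.
BUS_WIDTH_BITS = 64
CHIP_WIDTH_BITS = 8

chips_per_rank = BUS_WIDTH_BITS // CHIP_WIDTH_BITS   # 8 chips per rank
density_per_chip_mb = 1024 // chips_per_rank         # 1GB DIMM -> 128MB per chip
print(chips_per_rank, density_per_chip_mb)           # 8 x 128MB, as described above
[/code]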

Many things sound nice in theory but hit brick walls in practice.
 

Guest

Guest
@soda, I'd argue turning down the graphics settings is irrelevant. Most of the bottlenecking SC2 gets is from the CPU. A great CPU means minimum frame rates that never dip below 60.

A shit CPU with a really good graphics card means high frame rates, but gameplay that drops as low as 15 FPS because of that CPU.


If I was building for SC2, I'd skip the GPU and make sure I get a really solid processor.
 

natoco

Distinguished
May 3, 2011
Where are the prices of the memory you used for the benchmarks? Most bottom-of-the-range discrete GPUs use GDDR5, let alone DDR4. What a waste of money; really, how cheap do you want to go when you're going to use the thing for years? A lot of people spend more than the cost of a GPU at the supermarket per week, and that doesn't last years, so why be such Scrooge McDucks? When food costs more than a GPU and you whinge about cost, you need a reality check.
 

Crashman

Polypheme
Former Staff
[citation][nom]natoco[/nom]When food costs more than a gpu and you whinge about cost you need a reality check.[/citation]Perspective? I spend thousands of dollars a month on "monthly bills" and have a hard time coming up with the copay to visit a dentist. Why would such a small fee hold me back? Because thousands of dollars in "monthly bills" leaves me little discretionary income!
 

Crashman

Polypheme
Former Staff
[citation][nom]jaideep1337[/nom]Hmm seeing the benchmarks I wouldn't go over 1866 memory since the performance after that is pretty minimal and the costs come in higher too.[/citation]2133 is often cheaper than 1866.
 

mohit9206

Distinguished


http://www.xbitlabs.com/articles/graphics/display/amd-trinity-graphics_6.html#sect3
Radeon HD 7660D performs worse than the Radeon HD 6570.
Below is another review, from Rage3D.com, which basically compares the A10-5800K (Radeon HD 7660D) vs. the A10-5800K DG (Radeon HD 7660D + Radeon HD 6670 dual graphics), a Core i3 + Radeon HD 6670, and lastly a Core i7 + Radeon HD 6670. There are other configurations, though.

http://www.rage3d.com/reviews/fusi [...] x.php?p=11

As can be seen, the A10-5800K performs worse than the A10-5800K DG, the Core i3 + Radeon HD 6670, and the Core i7 + Radeon HD 6670. The exception is Civilization V, where all four provide similar performance. However, the A10-5800K DG does perform well.

If you continue to insist that the integrated Radeon HD 7660D is as good as a Radeon HD 6670 with DDR3 RAM (probably about 10% slower than with GDDR5 RAM), then you should back it up with some benchmarks.
 