AMD Trinity On The Desktop: A10, A8, And A6 Get Benchmarked!

[citation][nom]Tomfreak[/nom]You don't need a gigantic, slow L3; it doesn't help much. An average, optimally sized L3 is enough, and that would free up some die space for additional cores. Bulldozer is a crap architecture in terms of IPC. They either need to clock at VERY high rates or scale to more cores to beat Ivy Bridge. There is no way you can improve IPC by that much of a margin to be competitive against Ivy Bridge. Scaling core count is probably the best shortcut to beating Ivy. Back in the day, people were getting higher-clocked dual-core CPUs over quad-core ones. Look what happened: the quad cores ended up the winners.[/citation]

Don't talk like you know anything about CPU architectures. Bulldozer is an excellent architecture; however, AMD ruins its advantages by implementing it in what could be almost the worst possible and least optimized way. Piledriver fixed a few moderate problems and shows some serious improvements. Now think about what we would see if some of the bigger problems were solved. Bigger problems would be things like how the processors are designed: AMD switched from expert engineering teams that laid out the processors transistor by transistor, as with Phenom II, over to an automated, computer-driven design method...

Guess what: computers suck at designing highly optimized CPUs right now. They know the fastest way to design a processor, not the way to design the fastest processor. That increases die size and power consumption by about 20% while decreasing performance by about 20%, which compounds into a drop in power efficiency of well over a third. Then there's AMD's high-latency cache, which has gone far too long without a serious overhaul to cut that latency down, plus the ever-decreasing memory controller bandwidth efficiency at a given frequency - a very bad combination.
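To see how those two ~20% figures compound, here is a rough back-of-the-envelope check using the ballpark numbers claimed above (they are the poster's estimates, not measurements):

```python
# Back-of-the-envelope: how a ~20% performance loss and ~20% power increase compound.
# The percentages are the rough figures claimed above, not measured data.
perf_factor = 1 - 0.20    # automated layout said to cost ~20% performance
power_factor = 1 + 0.20   # and to add ~20% power (and die area)

perf_per_watt = perf_factor / power_factor
print(f"performance per watt vs. hand-tuned layout: {perf_per_watt:.2f}x")      # ~0.67x
print(f"energy for the same work vs. hand-tuned:    {1 / perf_per_watt:.2f}x")  # ~1.5x
# Undoing both penalties would therefore be roughly a 50% gain in perf/watt.
```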
 

How did you know that Trinity APUs do not contain resonant clock mesh technology? I didn't know that. It'd be helpful if you provided some sort of link or proof. T.I.A.

So... can Trinity run 'dual graphics' with lower-end GCN cards like the 7750 or 7770?
Since the top-end Trinity chip might be priced around $150 and seems to support x8+x8 CrossFire, will it be able to run two gaming graphics cards, e.g. 7850s in CrossFire, without bottlenecking them?
 

martel80

Distinguished
Dec 8, 2006
368
0
18,780
Would it be such a big problem to put a single Ivy Bridge CPU (CPU + MB comparable in price to the A10 platform) into the charts?

Did AMD request no direct comparisons to Intel?
 

Dan_H

Distinguished
Aug 9, 2010
9
0
18,510
[citation][nom]martel80[/nom]Would it be such a big problem to put a single Ivy Bridge CPU (CPU + MB comparable in price to the A10 platform) into the charts? Did AMD request no direct comparisons to Intel?[/citation]
They don't know the price points of the Piledriver chips yet, so there's no way to choose which Intel processors to compare them against.
 

tomfreak

Distinguished
May 18, 2011
1,334
0
19,280
[citation][nom]blazorthon[/nom]Don't talk like you know anything about CPU architectures. Bulldozer is an excellent architecture; however, AMD ruins its advantages by implementing it in what could be almost the worst possible and least optimized way. Piledriver fixed a few moderate problems and shows some serious improvements. Now think about what we would see if some of the bigger problems were solved. Bigger problems would be things like how the processors are designed: AMD switched from expert engineering teams that laid out the processors transistor by transistor, as with Phenom II, over to an automated, computer-driven design method... Guess what: computers suck at designing highly optimized CPUs right now. They know the fastest way to design a processor, not the way to design the fastest processor. That increases die size and power consumption by about 20% while decreasing performance by about 20%, which compounds into a drop in power efficiency of well over a third. Then there's AMD's high-latency cache, which has gone far too long without a serious overhaul to cut that latency down, plus the ever-decreasing memory controller bandwidth efficiency at a given frequency - a very bad combination.[/citation]If it were that good, it would already be running close to Sandy Bridge performance. As far as I can tell, Trinity fixes some of the problems that Bulldozer had, but the gap to Ivy Bridge is still very large. At this point it is going to take them a lot of effort to come even close to Ivy Bridge IPC. A smaller L3 frees up some die area as well as power headroom. Adding one more module, for ten cores, is probably their best chance of closing the gap against Ivy Bridge now.
 
[citation][nom]Tomfreak[/nom]If it were that good, it would already be running close to Sandy Bridge performance. As far as I can tell, Trinity fixes some of the problems that Bulldozer had, but the gap to Ivy Bridge is still very large. At this point it is going to take them a lot of effort to come even close to Ivy Bridge IPC. A smaller L3 frees up some die area as well as power headroom. Adding one more module, for ten cores, is probably their best chance of closing the gap against Ivy Bridge now.[/citation]

The architecture being good while AMD implemented it so poorly is exactly why it can't compete with Intel without the one-core-per-module mod: it doesn't perform the way it should because of AMD's very poor implementation of the die mask. How could adding more cores make any difference when most consumer software that is multi-threaded still can't make use of ten cores? Fixing the poor implementation is the only thing that AMD can do with their modular architecture now. A smaller L3 would leave more room, but that doesn't matter. The gap between Bulldozer and SB/IB seems large because AMD implemented it very poorly.

First of all, there are the design, cache, and memory controller problems that I've mentioned. Just how much would performance improve if all three of these things were fixed? I don't know for sure, but it would be an incredible improvement. Reverting to the transistor-by-transistor design methods alone should allow a more than 40% improvement in power efficiency from the roughly 20% reduction in power usage and 20% improvement in performance. That means that, with the other improvements on top, the total power efficiency gain would be a lot more than ~40%.

Using less L3 could help somewhat, but it could also hurt. The eight-core FX CPUs, with their full eight cores, are very fast for highly threaded work, but in their current implementation not good at all with quad-threaded work and lighter loads unless you disable half of the cores (one per module). A lot of highly threaded workloads benefit from large caches, so cutting L3 to squeeze in more cores for those workloads could be almost counter-productive.

Piledriver fixes some of the moderate problems with Bulldozer and yes, it shows improvement, yet not enough to catch SB and IB except maybe with the one-core-per-module mod. However, it also shows that AMD has not fixed the largest problems yet, and that AMD, for whatever reason, still has yet to sell a CPU with only one core per module enabled. They could compete against the i5s very well if they made some like that.

AMD could also have the second core in each module present itself as a logical processor, telling Windows not to load it up as if it were a full core. That would make the one-core-per-module mod unnecessary, give AMD another way to differentiate their processors, and make AMD more competitive in lightly threaded performance. Heck, AMD wouldn't even need to make any of the other improvements that I've mentioned if they did this with the Piledriver CPUs.
 

gto127

Distinguished
Jan 8, 2008
158
0
18,680
To blazorthon:
Thanks for the insight. What you're saying about disabling cores to get better performance is very interesting. Do you have a link where someone has tested this? If it benched well enough, I might consider a build based on it. With new processors coming in, the Bulldozers will surely be discounted soon.
 
[citation][nom]gto127[/nom]To blazorthon: Thanks for the insight. What you're saying about disabling cores to get better performance is very interesting. Do you have a link where someone has tested this? If it benched well enough, I might consider a build based on it. With new processors coming in, the Bulldozers will surely be discounted soon.[/citation]

http://techreport.com/articles.x/21865

It's not a very thorough investigation of it or of how it applies to gaming CPUs, but it's interesting and raises questions nonetheless. X-bit labs also looked at this when Bulldozer first came out, but I don't know if they did anything with the concept beyond a quick proof-of-concept test.
 

hetneo

Distinguished
Aug 1, 2011
451
0
18,780
[citation][nom]shin0bi272[/nom]That's what I was wondering... every time you get an Intel CPU review they always throw in an AMD or two for comparison. Why didn't they do that here? You can't make an informed purchase if you compare three versions of the same car make and model when there are other makes and models out there to look at. Oh, and Jill... AMD only has 10% of the market even with the APUs out there. So if they fail, Intel only goes from roughly 90% to 99% of the market... don't see them changing their pricing plans over that.[/citation]
Market share is not a very useful thing to bring into a debate about companies' pricing policies. A 10% market share doesn't mean that AMD is selling 8-9 times fewer CPUs than Intel; it means that AMD's sales are worth 8-9 times less than Intel's. Don't forget that Intel has SKUs priced at $1K, so in revenue terms AMD has to sell ten $100 CPUs to match every single $1K Intel CPU sold. Yes, AMD doesn't have a SKU above $200, and that's where the price matching is done: in the sub-$200 range. Intel is under pressure to offer the same performance in that price range. There's not a single Ivy Bridge CPU there; the best sub-$200 CPU Intel offers is the i5-2310, a Sandy Bridge CPU. The FX-8150 will blow away that CPU, and the new APUs will too. Unfortunately, everyone was/is pitting $200 AMD CPUs against the i5-2500 (shame on you too, Tom's) and up, CPUs that sit in the $250+ range, which is not the same class after all (a $50 price difference might not sound like much, but it's actually a 25% difference).
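To put toy numbers on that unit-share versus revenue-share distinction (the figures below are invented purely for illustration, not actual AMD or Intel sales data):

```python
# Toy illustration: a small revenue share does not imply an equally small unit share.
# All numbers here are made up for the example, not real sales figures.
amd_asp, intel_asp = 100, 300        # hypothetical average selling prices ($)
amd_units, intel_units = 30, 70      # hypothetical units sold (per 100 CPUs total)

amd_rev, intel_rev = amd_asp * amd_units, intel_asp * intel_units
unit_share = amd_units / (amd_units + intel_units)
revenue_share = amd_rev / (amd_rev + intel_rev)
print(f"AMD unit share: {unit_share:.0%}, AMD revenue share: {revenue_share:.0%}")
# -> AMD unit share: 30%, AMD revenue share: 12%
```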
 
[citation][nom]Hetneo[/nom]Market share is not a very useful thing to bring into a debate about companies' pricing policies. A 10% market share doesn't mean that AMD is selling 8-9 times fewer CPUs than Intel; it means that AMD's sales are worth 8-9 times less than Intel's. Don't forget that Intel has SKUs priced at $1K, so in revenue terms AMD has to sell ten $100 CPUs to match every single $1K Intel CPU sold. Yes, AMD doesn't have a SKU above $200, and that's where the price matching is done: in the sub-$200 range. Intel is under pressure to offer the same performance in that price range. There's not a single Ivy Bridge CPU there; the best sub-$200 CPU Intel offers is the i5-2310, a Sandy Bridge CPU. The FX-8150 will blow away that CPU, and the new APUs will too. Unfortunately, everyone was/is pitting $200 AMD CPUs against the i5-2500 (shame on you too, Tom's) and up, CPUs that sit in the $250+ range, which is not the same class after all (a $50 price difference might not sound like much, but it's actually a 25% difference).[/citation]

The FX-8150 is not better than the i5-2310 in gaming performance with both at stock. If you disable one core per module of the 8150 and then overclock both CPUs, they are probably very close. Maybe the 8150 can pull ahead, but even then it would not blow away the i5, and the power usage difference would probably be significantly greater than the performance difference. At stock, the i3s tend to beat the 8150. BF3 multiplayer might favor the 8150, but that could be the only game that does.

Thinking that the cost of a CPU defines its performance is wrong; price does not define performance. If it did, the i3 versus FX-8150 example would flatly contradict it. For highly threaded work, AMD is killing Intel in performance for the money, but for most people, that isn't very important.
 

You skipped the price points the FX CPUs launched at. Prices only came down recently after AMD cut them across the FX lineup; the FX-8150 was costlier than a Core i5-2500K for a long time, and the FX-8120 was costly (for its performance) too. You could get a $180 Core i5-2500K at Micro Center - that's under the $200 margin, yes? IIRC it went as low as $150-160 at one point.
2310 vs. 8150 depends on intended usage. If you're trying to run four VMs on a 2310, the 8150 will seem like a better choice. If you're building a basic PC for casual gaming and browsing, the 8150 would be a huge, power-hogging overkill. If you're building a gaming PC, the 8150 will have to be overclocked on all cores to reach 2310-level gaming performance.
The new desktop APUs will only outperform a 2310 in terms of iGPU performance, nothing else. The CPU parts of the new APUs can barely outperform their predecessors (Llano), lol.

Disabling half of the 8150 turns it into a 41xx CPU - check the sub-$200 gaming CPU roundup for the 4100's overclocked performance.
AMD isn't killing Intel in highly threaded performance per dollar. Most people who demand highly threaded performance from their CPUs simply opt for an Intel Core i7; blame it on marketing or on a sheer performance advantage if you want.
Right now AMD's biggest money makers are the APUs, e.g. Brazos and mobile Llano.
 

SuperVeloce

Distinguished
Aug 20, 2011
154
0
18,690
Disabling one core per module does not solve the problem. The integer cores in Bulldozer are very weak compared to Intel's. Yes, performance can suffer when Windows sends two threads to one module despite other modules being free, but sending every thread to its own free module does not make a miracle happen.
 

upgrade_1977

Distinguished
May 5, 2011
665
0
18,990
I was gonna say, after looking at the graphs, that it's great that we can buy an APU and be able to play games at 1080p, but then I read the article, and apparently the minimum FPS kills the experience. :(

It's gonna be big news, I think, when they finally do play smoothly at 1080p. It means all those people who want to game on a big-screen TV won't have to spend an arm and a leg to build a high-end system. You can save a lot of money on the PSU and GPU and still have future upgradability. Hence, I see these as a console replacement for people who want to get into PC gaming.
 
[citation][nom]De5_roy[/nom]You skipped the price points the FX CPUs launched at. Prices only came down recently after AMD cut them across the FX lineup; the FX-8150 was costlier than a Core i5-2500K for a long time, and the FX-8120 was costly (for its performance) too. You could get a $180 Core i5-2500K at Micro Center - that's under the $200 margin, yes? IIRC it went as low as $150-160 at one point. 2310 vs. 8150 depends on intended usage. If you're trying to run four VMs on a 2310, the 8150 will seem like a better choice. If you're building a basic PC for casual gaming and browsing, the 8150 would be a huge, power-hogging overkill. If you're building a gaming PC, the 8150 will have to be overclocked on all cores to reach 2310-level gaming performance. The new desktop APUs will only outperform a 2310 in terms of iGPU performance, nothing else. The CPU parts of the new APUs can barely outperform their predecessors (Llano), lol. Disabling half of the 8150 turns it into a 41xx CPU - check the sub-$200 gaming CPU roundup for the 4100's overclocked performance. AMD isn't killing Intel in highly threaded performance per dollar. Most people who demand highly threaded performance from their CPUs simply opt for an Intel Core i7; blame it on marketing or on a sheer performance advantage if you want. Right now AMD's biggest money makers are the APUs, e.g. Brazos and mobile Llano.[/citation]

Disabling half of the cores does not turn it into an FX-4xxx CPU; disabling half of the modules does that. Disabling one core per module gives each remaining core a boost in performance because it has the resources of the entire module to work with instead of sharing them with another core. AMD does not currently sell any CPUs that take advantage of this at stock.

Also, Micro Center deals are not an option for everyone; not all of us can shop at a Micro Center.
 
[citation][nom]blazorthon[/nom]Disabling half of the cores does not turn it into an FX-4xxx CPU; disabling half of the modules does that. Disabling one core per module gives each remaining core a boost in performance because it has the resources of the entire module to work with instead of sharing them with another core. AMD does not currently sell any CPUs that take advantage of this at stock.[/citation]

http://techreport.com/articles.x/21865

Also, a quote from a Tom's mod on the subject:
I’m out of time, but so many comments deserve attention.

@ Jtt283 & RedJaron – Feedback noted. Thank you.

@ Blazorthon – Thanks, appreciate any mobo research you are willing to do. Come next SBM, I’ll have a limited time to decide.

Your FX-8120 core/module question is totally worthy of a deeper look, but I’m not sure who among us has the time to pull it off any time soon. I’m committed to other projects outside the SBM, and not sure I could pull off a worthy FX-8120 build at $500. Also a problem: in an SBM, all overclocked tests must run the same settings. If we disable cores for games, I’d need to leave them disabled for apps too. From what I have read (not tested), disabling 4 cores … leaving 4 cores on 4 modules … does outperform the FX-4100 (4 cores on 2 modules). I understand the frustration of games/OS not taking advantage of the architecture's threading abilities, but otherwise feel overclocked FX can game right now as is. I’d lean towards single-GPU configs of any size, and while low-resolution bottlenecks will exist, it's more about the experience at the intended resolution/details. But I personally own and have tested so many Phenom IIs that I also have to admit FX didn’t impress me enough to warrant further attention. I’d want to tinker on ALL hardware (and share data) given the time/opportunity, but that’s far from reality.


Sorry, but I need to check out of this thread for a while. Work to do. Thanks ALL for the discussions.

http://www.tomshardware.com/reviews/gaming-pc-do-it-yourself-geforce-gtx-560,3216.html

It's near the end of the comments, on the last or second-to-last page.
 

Disabling half of the modules... that's possible? I didn't know that. That'd certainly give the remaining modules some headroom. Has someone benched an FX like this?
 


Disabling half of the modules makes an FX-8150 basically an FX-4100, except that, if I remember correctly, it retains slightly higher Turbo clocks.

We have cores 0 and 1 in module 0, cores 2 and 3 in module 1, cores 4 and 5 in module 2, and cores 6 and 7 in module 3. Disable one core from each module and each remaining core gets a large boost in performance per Hz, because a full module's resources are available to a single core instead of being shared between two. Disable a full module and this does not happen; the chip simply becomes an FX with a lower core count. There is a benchmark link at the top of my comment two posts above this one.

Disabling specific cores can be done if you use a motherboard whose BIOS supports it. I know that at least two Asus AM3+ boards can do it, but I don't think all AM3+ boards can.
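If your board's BIOS can't do it, you can loosely approximate the effect for a single application by pinning it to one core per module from software. Here is a minimal sketch using Python's psutil library (not something from this thread, just one way to set affinity); it assumes the core numbering described above and only mimics the mod, since the sibling cores still exist but sit idle:

```python
import psutil

# Assumes the OS enumerates cores as described above:
# module 0 -> cores 0,1; module 1 -> cores 2,3; module 2 -> cores 4,5; module 3 -> cores 6,7.
ONE_CORE_PER_MODULE = [0, 2, 4, 6]

def pin_to_one_core_per_module(pid):
    """Restrict the given process to the first core of each module.

    This only approximates the BIOS mod: the sibling cores are still enabled,
    but as long as nothing runs on them, each pinned thread gets the module's
    shared front end and FPU mostly to itself.
    """
    proc = psutil.Process(pid)
    proc.cpu_affinity(ONE_CORE_PER_MODULE)  # set the allowed-CPU mask
    return proc.cpu_affinity()              # read it back for confirmation

if __name__ == "__main__":
    import os
    print(pin_to_one_core_per_module(os.getpid()))
```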
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


I was gonna ask how you could call Bulldozer an "excellent" architecture, but that point about disabling one core per module does seem like valid evidence that it's simply a poor implementation. Mind if I ask what your profession is, aside from being a hardware enthusiast like most of us? :)) Do you work in the (microprocessor) industry or something?

I also suddenly wondered how disabling one core per module increases performance. Yes, each core would have the whole module's resources minus the other, disabled core, but why does leaving everything enabled equate to lower quad-threaded performance? Is there a thread scheduling/assignment problem where all four threads get assigned to only two modules (four cores), so they're choking up just two of the four floating-point units the CPU has to offer?



Oh, SuperVeloce pretty much mentioned that idea already. Anyway, my response to what he said is the reasoning I gave above.

AMD could also have the second core in each module present itself as a logical processor, telling Windows not to load it up as if it were a full core. That would make the one-core-per-module mod unnecessary, give AMD another way to differentiate their processors, and make AMD more competitive in lightly threaded performance. Heck, AMD wouldn't even need to make any of the other improvements that I've mentioned if they did this with the Piledriver CPUs.

That might not only make the mod unnecessary but also potentially yield better performance than the modded state, right? Hm... I have some ideas in my head about this, but would you mind elaborating on how Windows deals with logical processors? Does it refrain from loading one up until all the resources of its main core are already fully swamped?

I wonder... wouldn't it be awesome if Bulldozer chips that had a (mildly) defective core in one of the modules were sold under a new SKU, or if, for example, a four-module part were sold as a three-module part? Then you could reactivate the fourth module with only its non-defective core active. This might take some work on your part and require the appropriate options, like being able to choose which core in a module stays active. With the "new SKU" idea, they'd just sell it as a one-core-per-module part. Hehehe... just my imagination gone wild. They probably do the latter (deactivate the defective module entirely). And blaz, you did mention that AMD chose a faster production design method that doesn't allow for the best implementation. If that's the case, maybe the number of defective processors they get is lower (as part of this faster method)? It's just a possibility that popped into my head. BTW, where did you get that info about AMD having computers design the chip implementation?

Also, about pricing: isn't Micro Center a specific electronics store chain, or is it a generic term for a store that sells heavily discounted stuff? Does anyone know how they're able to offer discounts as big as the ones De5_roy mentioned?
 

shloader

Distinguished
Dec 24, 2001
231
0
18,690
"Despite common data rates and timing settings, the Llano-based APU gets more out of its dual-channel DDR3 memory controller than Trinity. AMD’s newer design technically supports higher settings, though, meaning you should be able to get up to DDR3-2133 using one slot per channel. "

So... just what settings were the Trinity APUs set to? I would assume the Llano would be set to DDR3-1866 and Trinity to DDR3-2133. Sure, a test at exactly the same memory settings tells you about memory controller performance, but I'd also like to see an apples-to-apples comparison with both at their best stock settings. Simple reason: that's how I'd intend to use it, at a minimum. Actually, my A8-3850 has its memory running a little higher.

Whatever such tests would reveal, the simple fact is that, considering the need to swap out the motherboard, I feel no compelling desire to upgrade to Trinity. Maybe next year, with the improvements coming for GPU memory performance.
 
@army_ant7: AFAIK there is only one desktop Bulldozer CPU die, called Orochi. The best-binned chips are labelled FX-8150 with everything enabled, and the lower-numbered parts are just derivatives: an FX-6xxx is a four-module Bulldozer with one module lasered off, and an FX-4xxx is a four-module Bulldozer with two modules lasered off. Likewise, an A6-5400K is an A10 with one module lasered off and clocked lower to fit TDP limits. The defective bits can't be re-enabled in this case, because if they could, one could go as far as buying an FX-4100 and unlocking it into a top-of-the-lineup FX-8150 ($100 vs. $250+). AMD (or any corporation) won't risk that.
AMD usually makes one die per platform and sells lower derivatives as lower-specced CPUs. That's why one could 'unlock' a Phenom II X2 into an X4 or, in Zosma's case, an X4 into an X6.
IMO the number of defective CPUs isn't lower with Zambezi - it's the opposite. Thank GloFo and AMD's FX designers.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


Thanks for that info! When you say "lasered off," I get the image of them intentionally wrecking the modules (or cutting their connections) that aren't part of a specific SKU's specifications. Hm... I'm guessing the decision to produce the same chip and then modify it for sale is more of a business decision, to cut costs and such, and I do find it reasonable. Wow... I do wonder where you guys get your info on this. It's like you actually work at AMD. :)) Just kidding, of course.

"Orochi" pertains to the die? I didn't know they gave the dies names.

Just an idea, if what's above were the case: would that mean the SKUs with modules disabled have more room to dissipate heat (the "corpses" would absorb some of it, I would think) and possibly better overclockability? Better, that is, than if they had a smaller die with just the active parts on it.

Anyway, my previous post is still open for answers in case it got missed.
 