AMD Trinity On The Desktop: A10, A8, And A6 Get Benchmarked!

Status
Not open for further replies.


You are both wrong. It is 26.72 percent.




 

K2N hater

Distinguished
Sep 15, 2009
Just wish the A6 were a bit more efficient so we could build tiny PCs with 120 W power supplies. Judging by peak power draw, a fully loaded A10-5800K PC would probably run great on a 200 W PSU, while the considerably slower A6-5400K wouldn't work with anything under 150 W.
 

kubunteando

Distinguished
Sep 3, 2008
The overclocking section said it would be revisited later.
For budget gamers like me, it would be interesting to know whether PowerTune supports Trinity APUs. PowerTune is an easy way to overclock GPUs, but is it supported on Trinity?

A comment for this thread: PicoPSUs are more efficient than switching PSUs, plus they are totally silent, so I recommend a 120 W PicoPSU for building tiny, silent computers with ITX boards. I have one, and it's the best PC I have built so far. I only need a Trinity APU to be able to play current games.
So please, AMD, don't make us wait long.
 

element1981

Distinguished
Sep 26, 2009
I'm far more interested in the mobile versions of this chip, which will allow some light-to-medium gaming on the road with better battery life in an affordable laptop.
 

triny

Distinguished
Feb 2, 2012
I don't expect the holy grail from AMD with Trinity, or even with Kaveri. By the time Excavator arrives, hopefully more software developers will be on board.
Intel should be working on this by then, too.


Second time reading the article.
Thanks, Chris.

 

youssef 2010

Distinguished
Jan 1, 2009
[citation][nom]Article[/nom]AMD also makes it a point to note that overclocking its x86 cores is far less effective than tuning the graphics engine. And while the Radeon HD 7660D on our A10-5800K is set to operate at 800 MHz by default, it's running beyond 1 GHz in AMD's lab. Unfortunately, while our ASRock-based platform has a field for graphics overclocking, setting it higher currently doesn't seem to affect performance. We'll need to revisit overclocking later this year.[/citation]

Incomplete, as it always depends on the application. I totally appreciate AMD's APU initiative, and I respect what they're trying to achieve. But I can't help thinking that Intel suits our needs better by offering a class of CPU performance well beyond AMD's current capabilities.

As a side note: AMD was able to achieve exactly the promised 15% IPC improvement over Bulldozer, so I don't see why they couldn't have aimed a bit higher and given the current Intel CPUs a run for their money.
 
I'm not impressed with how desktop Trinity's CPU is shaping up, but that was expected: AMD is offering passable CPU performance with a good IGP.
The biggest problem I saw was memory bandwidth. IMO, AMD's integrated memory controller needs to get much better; the IGP is even more bandwidth-starved than last time (Llano). Even SB Pentiums get 20 GB/s with DDR3-1333 RAM. AMD's IGP needs more bandwidth. I wish they'd get off the 'moar cores'/'moar HSA' bandwagon and onto the 'moar RAM bandwidth' bandwagon.
Mobile Trinity is better than the desktop version.
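For what it's worth, the ~20 GB/s figure quoted above is easy to sanity-check. Here is a quick sketch of the theoretical peak for DDR3 (each channel is a 64-bit, i.e. 8-byte, bus; real-world throughput is noticeably lower than this peak):

```python
# Theoretical peak DDR3 bandwidth: transfer rate (MT/s) * 8 bytes per
# transfer per 64-bit channel * number of channels, in decimal GB/s.
def ddr3_peak_gb_s(mt_s, channels=2):
    """Peak bandwidth in GB/s; sustained throughput is lower in practice."""
    return mt_s * 8 * channels / 1000

print(ddr3_peak_gb_s(1333))  # dual-channel DDR3-1333 -> 21.328 GB/s peak
print(ddr3_peak_gb_s(1866))  # dual-channel DDR3-1866 -> 29.856 GB/s peak
```

So the Sandy Bridge Pentium number above is simply dual-channel DDR3-1333's peak, rounded down.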
 

Teslarifle

Distinguished
Nov 9, 2011
I'm a little confused: is anything from this next generation of AMD processors expected to dethrone, or at least compete with, the Core i5-2500K for best-value gaming CPU? If not, by the time AMD gets around to doing so, Intel will be releasing the successor to Ivy Bridge anyway. Hopefully AMD is more generous with its thermal solution than Intel was with Ivy Bridge; could that sort of evaluation be included in the next article once the commercial versions of the CPUs are released?
 

nerrawg

Distinguished
Aug 22, 2008
Wow, I bought a Phenom II X2 550 BE back at the start of 2009; it cost me £55 ($85). It's been unlocked (X4 B50) and running at 3.8 GHz 24/7 ever since! It's not often that three years go by in today's tech age and there still isn't an upgrade worth buying for the same price. It's great that Piledriver looks to be a bit better than Bulldozer, but with the way things look now, I think my next upgrade will have to be Intel if AMD can't get its CPU act together. I don't really understand the APU from a gaming perspective; after all, if you want to go cheap, you buy a console. The PC is the realm of the enthusiast (or nerd, lol), and we demand the best!
 

vilenjan

Distinguished
Aug 22, 2010
At least AMD should win out in budget-to-mainstream mobile/HTPC setups. Intel will still outsell them there due to the buying public's jaded perception, but at least AMD isn't throwing in the towel in every corner of the market.
Also, everyone should keep in mind that the desktop Piledriver CPUs will have resonant clock mesh technology, which the Trinity APUs lack; that should account for roughly another 10% performance increase, and they will ship with sizable L3 caches. So we should see about a 20% improvement from the desktop Piledrivers over the desktop Bulldozers.
Will this be enough to beat Ivy Bridge? Nope. But if AMD prices its CPUs well, it has a real chance in the budget-to-mainstream market, competing with Core i5 and lower CPUs.
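As a quick sanity check on stacking such estimates: if a ~15% per-clock gain and a ~10% gain from higher clocks were independent, they would compound multiplicatively rather than simply add. (The percentages here are just the thread's own rough figures, not measured results.)

```python
ipc_gain = 0.15    # the ~15% Trinity-over-Bulldozer figure discussed above
clock_gain = 0.10  # assumed extra gain from resonant clock mesh (higher clocks)

# Independent gains multiply: (1 + a) * (1 + b) - 1, not a + b.
combined = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"{combined:.1%}")  # 26.5%, slightly more than the additive 25%
```

That multiplicative stacking is likely where the "26.72 percent" figure earlier in the thread came from, with slightly different inputs.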
 

cp8086

Distinguished
Oct 25, 2010
[citation][nom]mayankleoboy1[/nom]in the OpenCL Winzip benchmark, when openCL is enabled the workload is done only by the iGPU or the CPU as well ?[/citation]

So far, I have checked four AMD cards (HD 5770, 5830, 6670, and 6850), and GPU usage was always zero with the OpenCL code path.

Please read my post on false claims about GP-GPU:
http://www.tomshardware.co.uk/forum/357179-15-opencl-faster-compression

[citation][nom]mayankleoboy1[/nom]i mean what is the processor usage during the benchmark ? are all CPU cores used? or only one?[/citation]
WinZip is multithreaded but doesn't saturate all cores; with the OpenCL option enabled, CPU usage is higher.
 

army_ant7

Distinguished
May 31, 2009


day 1 Dollar = 3 1/3 bananas
day 2 Dollar = 5 bananas

3 1/3 bananas + 50% = 5 bananas

How do you get 50%?
(5 - 3 1/3) / 3 1/3 = 50%

Edgar, that looks like the formula Chris gave.
(131 - 88) / 131 = 32.8%

Performance in this case seems to be how much time it takes to accomplish the task. As I see it, the 32.8% Chris gave was more precise ('cause you said 33%, not 32%), but to be particular for accuracy's sake, it should be rounded off to two significant figures, which is all 88 seconds has (as I learned in my high-school science classes), so 33% is more valid. An improvement is how much LESS time it takes compared to the worse 131 seconds. The 49% you gave earlier was how much MORE work the 88-second sample would do in the same timeframe as the 131-second sample; the opposite, how much less work the 131-second sample does, is 32.8%. This is how it seems to me so far.

(Throughout this whole post, I meant no aggressiveness, imposition, anger, or anything else negative. Just a friendly sharing of ideas. And I may be wrong somewhere and would appreciate anyone's response. :) )
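For anyone following along, the two percentages being debated above come from dividing the same 43-second difference by different baselines:

```python
old, new = 131.0, 88.0  # seconds to finish the same task

# Relative to the old run: how much less time the new run needs
time_reduction = (old - new) / old    # -> 32.8% less time

# Relative to the new run: how much more work per second it does
throughput_gain = (old - new) / new   # -> 48.9% more throughput

print(f"{time_reduction:.1%} less time, {throughput_gain:.1%} more throughput")
```

Both numbers describe the same improvement; they only differ in whether the slower or the faster run is the reference point.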
 

cangelini

Contributing Editor
Editor
Jul 4, 2008
[citation][nom]A Bad Day[/nom]Can I run Trinity in CF with a 7970?[/citation]
It will support dividing on-die PCI Express into two second-gen eight-lane links, yes. You cannot, however, do Dual Graphics with a single 7970 (nor would you want to).
 
Guest
So, based on what we're seeing, Piledriver is a 15% performance boost WITH NO L3 CACHE. I could be wrong, but it seems this could mean an even greater performance gain when the FX-series Piledriver is released (even more so given that AMD has taken the strategy of releasing its next-gen core, without all of the enhancements, in an APU first, meaning Trinity is really a Piledriver Light). If we were to figure an average 25% increase (I think this is doable), it wouldn't be enough to overtake Intel, but given the right price, it would at least make me take a second look at AMD when building a gaming machine. As it stands right now, I simply can't justify anything other than Intel in a higher-end machine.
 

tomfreak

Distinguished
May 18, 2011
[citation][nom]ElZoido[/nom]Better GPU part (and resulting game perfomance increases) aside, the most interesting part, imho, is getting some first glimps of Piledriver cores.With the addition of an (hopefully improved) L3 - I don't agree with Tomfreak, more cores won't give much benefit in anything besides highly multithreaded applications, which are still rare - they might finally be a replacement for Phenom IIs.It seems that they might finally hit the clock rates necessary to get enough performance from this architecture , which, coupled with the IPC improvements might still not take them to the same level as Intel, but at least make them superior to the previous generation.I never expected wonders from Piledriver, but it might at least be a decent enough upgrade to the underwhelming Bulldozer CPUs.[/citation]You don't need a gigantic, slow L3; it doesn't help much. An optimally sized L3 is enough, and that would free up some die space for additional cores. Bulldozer is a poor architecture in terms of IPC: they either need very high clock rates or more cores to beat Ivy Bridge. There is no way you can improve IPC by a big enough margin to be competitive against Ivy Bridge, so scaling core count is probably the best shortcut. Back in the day, people chose a higher-clocked dual-core CPU over quad-core ones; look what happened, the quad core ended up the winner.
 

A Bad Day

Distinguished
Nov 25, 2011
[citation][nom]cangelini[/nom]It will support dividing on-die PCI Express into two second-gen eight-lane links, yes. You cannot, however, do Dual Graphics with a single 7970 (nor would you want to).[/citation]

I thought since CCC 10.3, CrossFire supported mixing different architectures.
 
[citation][nom]muy[/nom]i'm an amd fanboy but i think it is pathetic that 2-3 year old amd chips still outperform amd's newest chips WHEN CLOCKED AT THE SAME SPEED. especially in games.give me a phenom II on 32 nm please. i simply can't understand why amd is not doing that.[/citation]

The modular architecture is superior to the older K10-derived architectures. AMD is simply improving the implementation slowly, starting out with a poor implementation (Bulldozer) and fixing the problems as they move along. Besides, if you take even a Bulldozer FX-8xxx CPU and disable one core per module, it is already superior to Phenom II in performance at the same frequency, because each remaining core has its entire module's resources to itself.

With Piledriver already faster per clock and more power-friendly at the same frequency than Bulldozer, doing the same would probably make it a good deal faster than even Nehalem, since the FX-81xx CPUs can already meet or somewhat beat Nehalem quad-core i5s and even i7s. A Piledriver eight-core CPU with one core per module disabled could approach even Sandy Bridge in performance per clock.

Many people look at these CPUs and just don't realize just what they're capable of because AMD didn't make it obvious (technically, I think that AMD actually claimed that this was not true, but if so, then they seem to have failed to test it properly)... I guess that you're one such person. Don't judge a book by its cover. If you're really an AMD enthusiast, then you should have looked into this.
 
[citation][nom]gondor[/nom]Highly unlikely. Top Trinity chips (A10) are likely to cost as much as top of the line Llano (~$120, if not more), meaning that by saving $40-50 by going with cheaper dual-core Intel Sandy Bridge Pentium CPU and not wasting 1 GB of system RAM you can afford a HD7750 ($110) in place of HD6670 ($60) that you would use in Trinity Crossfire, and the combination would soundly demolish said Crossfire configuration in games, let alone single-threaded workloads which seem to be quite common in Tom's testsuite.Crossfire configuration defies the very idea behind APUs. If you want to use discrete graphics to speed things up, go discrete all the way (plus you get much better scaling with upgrades).[/citation]

Actually, a 6670/7670 plus an A10 would probably beat the 7750 considerably. Furthermore, a Pentium has no chance against an overclocked APU, especially a Trinity APU.

[citation][nom]youssef 2010[/nom]Incomplete, as it always depends on the application. I totally appreciate AMD's APU initiative and I respect what they're trying to achieve. But I can't help thinking that Intel suits our needs better by offering a class of CPU performance way beyond AMD's current capabilities.As a side note. AMD was able to achieve the exact 15% IPC improvement over Bulldozer so, I don't see why they couldn't have aimed a bit higher and give the current Intel CPUs a run for their money.[/citation]

AMD could have aimed for much higher... However, they didn't. I would assume that to them, going over their promised 10-15% performance increase per generation might be as bad as going under it. With how many things are wrong with BD and BD based CPUs right now, AMD could probably have aimed for a several times higher gain if they wanted to. Considering that AMD went backwards on the memory controller, they could have had similar gains in graphics performance too just by making a memory controller at least almost as good as Intel's memory controller in SB. That should grant a more than 15% jump in graphics performance strictly from the large jump in bandwidth.

AMD literally could just fix the memory controller and/or give the IGP some serious cache and make a one or two generation jump in performance while reusing the same IGP. AMD could have probably more or less doubled CPU and GPU performance in just one generation because of how many things are wrong with their CPUs. Their CPU architectures are just as good, if not better than, Intel's and their GPUs are far better, but AMD is shooting themselves in the foot repeatedly in what I think is an incredible amount of ways.
 