AMD's A10-5800K Trinity APU Overclocked to 7.3 GHz

[citation][nom]vittau[/nom]I'm not sure of the viability of this, but I just had this idea of a "gatling" processor: suppose you have 20 total cores, but only keep 4 cores active at any given time.
....
It would also be scalable depending on the application (no. of active cores × clock). We are reaching that point where we have to think of EVERYTHING we possibly can.[/citation]

I like your idea, but the biggest drawback is software multithreading. Someone will have to get software developers to thread their programs. Most programs are only one- or two-threaded right now, especially games.
 

I enjoyed reading your idea. For all we know, it's worth patenting. Hehehe... Some other things to consider, though: your cooling solution would have to be good enough to cool down the previously heated-up cores in time for their next turn, and there's the question of whether continuously heating and cooling the cores would cause problems (whether the materials they're made from would degrade this way).

If you have enough active threads running, then maybe all the cores could run at a lower clock that wouldn't overheat them and force them to switch out. But for single-threaded or lightly-threaded applications, the active cores could be clocked high enough that even if they heat up quickly, they would have plenty of other cores to switch out with. You would need some sophisticated algorithm for this system of core management to work (including one that may need to take voltage into account), though it may not be impossible. :)
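
Something like this, maybe? Just a toy sketch in Python; the clock curve and the threshold are numbers I made up, and a real implementation would live in firmware, not a script:

[code]
# Toy sketch of that management algorithm. Every number here
# (clock curve, swap threshold) is invented for illustration.

MAX_CLOCK_GHZ = 5.0   # assumed top clock with few threads active
SWAP_TEMP_C = 85.0    # assumed swap-out threshold

def target_clock(active_threads, total_slots=4):
    # Fewer active threads -> higher clock on the cores still running.
    threads = min(active_threads, total_slots)
    return MAX_CLOCK_GHZ * (1.0 - 0.15 * (threads - 1))

def maybe_swap(active, parked, temps):
    # Swap any active core past the threshold for the coolest parked core.
    for core in list(active):
        if temps[core] >= SWAP_TEMP_C and parked:
            coolest = min(parked, key=lambda c: temps[c])
            active.remove(core)
            parked.remove(coolest)
            active.add(coolest)
            parked.add(core)
    return active, parked

print(target_clock(1))  # 5.0 GHz with one thread
print(target_clock(4))  # 2.75 GHz with all four slots busy
[/code]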
 

Disabling the on-die graphics doesn't make too much sense, considering Trinity can already clock up to 4.2 GHz. There isn't a lot of headroom.
What I'd like to see is games evolving to utilize OpenCL. That way, when someone adds, say, a 7850, the 7850 could do all the heavy lifting for the in-game graphics while the on-die GPU assists the x86 cores via OpenCL (or even DirectCompute), or maybe acts as a physics processing unit, similar to PhysX.
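
Roughly what I mean, as a toy sketch (Python with the pyopencl bindings; the device pick and the little "physics" kernel are placeholders, not how any shipping game actually does it):

[code]
# Toy OpenCL offload sketch: run a trivial "physics-style" kernel on a
# second GPU (e.g., the APU's IGP) while a discrete card handles rendering.
# The device indexing is an assumption; real code would match by name.
import numpy as np
import pyopencl as cl

platform = cl.get_platforms()[0]
gpus = platform.get_devices(device_type=cl.device_type.GPU)
igp = gpus[-1]  # assume the IGP enumerates after the discrete card
ctx = cl.Context([igp])
queue = cl.CommandQueue(ctx)

src = """
__kernel void integrate(__global float *pos, __global const float *vel,
                        const float dt) {
    int i = get_global_id(0);
    pos[i] += vel[i] * dt;  /* toy Euler step, standing in for physics */
}
"""
prog = cl.Program(ctx, src).build()

pos = np.zeros(1024, dtype=np.float32)
vel = np.ones(1024, dtype=np.float32)
mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=vel)

prog.integrate(queue, pos.shape, None, pos_buf, vel_buf, np.float32(0.016))
cl.enqueue_copy(queue, pos, pos_buf)
print(pos[:4])  # positions after one 16 ms step
[/code]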
 
[citation][nom]mysteoa[/nom]I like your idea, but the biggest drawback is software multithreading. Someone will have to get software developers to thread their programs. Most programs are only one- or two-threaded right now, especially games.[/citation]Yes, but I guess this is the same problem we have right now too. I don't see how it's any bigger with my suggested design.
The processor would adapt to a single-core application by shutting down the other cores and increasing the clock frequency.

[citation][nom]army_ant7[/nom]I enjoyed reading your idea. For all we know, it's worth patenting. Hehehe... Some other things to consider, though: your cooling solution would have to be good enough to cool down the previously heated-up cores in time for their next turn, and there's the question of whether continuously heating and cooling the cores would cause problems (whether the materials they're made from would degrade this way). If you have enough active threads running, then maybe all the cores could run at a lower clock that wouldn't overheat them and force them to switch out. But for single-threaded or lightly-threaded applications, the active cores could be clocked high enough that even if they heat up quickly, they would have plenty of other cores to switch out with. You would need some sophisticated algorithm for this system of core management to work (including one that may need to take voltage into account), though it may not be impossible.[/citation]
You bring up some valid points.
I think the thermal-shock problem can be worked around with some intelligent switching, trying to make sure temperatures are kept within a low variance.
But I have no idea if we'd be able to cool down the disabled cores in time for their next turn...
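
To make the "low variance" idea concrete, here's a quick toy simulation (Python; the heat/cool rates are completely made up):

[code]
# Toy "gatling" rotation: 20 cores, 4 active, shifting round-robin so
# each core heats for 4 ticks and then cools for 16. Rates are invented.
TOTAL_CORES = 20
ACTIVE_SLOTS = 4
HEAT_PER_TICK = 8.0   # assumed deg C gained per tick while active
COOL_PER_TICK = 2.0   # assumed deg C shed per tick while parked
AMBIENT = 40.0

temps = [AMBIENT] * TOTAL_CORES

for tick in range(200):
    active = {(tick + i) % TOTAL_CORES for i in range(ACTIVE_SLOTS)}
    for core in range(TOTAL_CORES):
        if core in active:
            temps[core] += HEAT_PER_TICK
        else:
            temps[core] = max(AMBIENT, temps[core] - COOL_PER_TICK)

# With heat gained (4 * 8) matching heat shed (16 * 2), every core
# oscillates in the same bounded band instead of thermally running away.
print(f"min {min(temps):.0f} C, max {max(temps):.0f} C")
[/code]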
 
[citation][nom]jerm1027[/nom]Disabling the on-graphics doesn't make too much sense, considering Trinity can already clock up to 4.2GHz. There isn't a lot of headroom.What I'd like to see is games evolve to utilize OpenCL. That way when someone does add-on say a 7850, the 7850 could do all the heavy lifting with the in-game graphics and the on-die GPU could assist the x86 cores via openCL (or DirectCompute even), or maybe be used a physics processing unit or something similar to PhysX.[/citation]

There's plenty of headroom. Disable that IGP and you disable more than half the chip. That cuts power consumption dramatically and opens up a lot of thermal headroom.

I'd like to see OpenCL doing that too (I've even brought it up before), but simply disabling the IGP for more thermal headroom is doable now, without developer intervention.
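
Napkin math on why that helps (the 100 W TDP is the rated figure; the IGP draw is a pure guess):

[code]
# Rough power-budget arithmetic, all illustrative: if the IGP takes a
# big slice of the shared 100 W TDP, parking it hands that slice back
# to the x86 modules as overclocking headroom.
TDP_W = 100.0       # A10-5800K rated TDP
IGP_LOAD_W = 45.0   # assumed IGP draw under load (guess)

cpu_budget_on = TDP_W - IGP_LOAD_W
cpu_budget_off = TDP_W
print(f"x86 power budget: {cpu_budget_on:.0f} W -> {cpu_budget_off:.0f} W "
      f"(~{cpu_budget_off / cpu_budget_on:.1f}x)")
[/code]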
 
APUs are very overclockable because they're heavily underclocked in the first place. The TDP headroom must be able to handle all four cores and the iGPU going full blast. That rarely actually happens (a few high-end games come to mind), so a hobbyist can use that extra headroom to speed up the chip.

EX:

I have an A8-3550MX in my travel notebook: a 2.0 GHz quad-core K10.5 (each core has 1MB of L2 vs. the typical 512KB). I was able to get all four cores going at 3.0 GHz, though the chip got to 70+ Celsius; I would never want to run it that hot full-time. In single-threaded apps I was able to get a single core to 3.0 GHz with the other three at 800 MHz and the iGPU supplying the graphics. It required K10stat and knowing how to utilize the affinity bits.
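
For reference, the affinity half of that is scriptable on Linux straight from the standard library (just a sketch; the per-core clocks themselves still need something like the cpufreq sysfs knobs, which is the part K10stat covers on Windows):

[code]
# Pin this process to one core so a single-threaded app stays on the
# boosted core (Linux-only; on Windows the equivalent is K10stat plus
# Task Manager affinity).
import os

os.sched_setaffinity(0, {0})  # pid 0 = this process; run on core 0 only
print("pinned to cores:", os.sched_getaffinity(0))
[/code]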

Something like an A10-5800K should have an even greater overclock range.
 
First off, if you cannot appreciate the achievement, then perhaps you shouldn't even bother posting on this article.

Second, it's an APU. Comparisons to Intel's $220 and near-$400 chips, let alone AMD's own Phenom II and FX lines, are pretty pointless, considering the PD module in the FX chips to come is far stronger. Add to that the fact that people here seem incapable of comprehending that the APU is not designed around x86 performance but around parallel computing between x86 and GPU, and in that department the results show up even Intel's vastly more expensive options.

Did anybody note that the Trinity APU scored 1.08 in single-threaded Cinebench versus the Stars-based Llano's 0.89? It was also higher than the 1.02 scored by an FX-8150. One can only expect that the FX parts based on the PD architecture will score roughly 55% higher.

The age of legendary x86 performance is coming to an end. This realization will sink in next year, with massive news of all kinds of prominent software and game developers revealing moves to HSA.
 
[citation][nom]sarinaide[/nom]Did anybody note that the Trinity APU scored 1.08 in single-threaded Cinebench versus the Stars-based Llano's 0.89? It was also higher than the 1.02 scored by an FX-8150. One can only expect that the FX parts based on the PD architecture will score roughly 55% higher.[/citation]

Eh? In a single-threaded benchmark, I think the 83xx series will perform almost identically. Expect the multi-threaded benchmark to be in the 6.5 to 6.7 range (not a massive gain, but it's a mild architectural update with a more mature process, and half the gains are from a higher clock anyway).
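
Napkin math behind that range (the FX-8150 baseline score is my assumption, not from the article):

[code]
# Assume an FX-8150 Cinebench R11.5 multi-threaded score of ~5.9, base
# clocks of 3.6 GHz (8150) vs 4.0 GHz (8350), plus a small IPC bump.
base_score = 5.9           # assumption
clock_gain = 4.0 / 3.6     # ~11% from the higher base clock
ipc_gain = 1.02            # assume a couple percent from the arch tweaks

print(round(base_score * clock_gain * ipc_gain, 2))  # ~6.69 -> the 6.5-6.7 ballpark
[/code]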
 


Not in OpenGL 😀



 


You're just mad because the Cedar Mill Celly is still the highest-clocked Intel processor :lol:

 
@Wisecracker

Forgive me, but what is going to change with the Piledriver architecture to allow a large boost in OpenGL performance? If you can point to an article, it'll be much appreciated. :)

(FWIW, Scali doesn't seem to think much of Steamroller...

http://scalibq.wordpress.com/2012/09/26/amd-steamroller/ )
 

Ahh man, why'd you call me out like that, bro? :lol: 😗
 

@proffet
For some reason, I think the Piledriver FXs may perform quite a bit better than "almost identically" to the Bulldozer ones in single-threaded performance (though you guys were talking about Cinebench, so I may not know what I'm talking about). :)

@Wisecracker
Like silverblue said, why OpenGL performance? :)
 
If the single-threaded performance of Piledriver is better than Bulldozer's (for one, that's not saying much), that means it's equal to or slightly better than Phenom II X4 (Deneb), which is still a notch below Intel's Nehalem (i5-760)...

I'm just saying...
Oh, and Cinebench breaks it all down for you, single-thread-wise...

 
Oh! I see... What I got wrong was that you were comparing Piledriver to Stars/Husky, not Piledriver to Bulldozer. Alright, got it. Sorry and thanks! :)
 
I don't know when *it* actually happened, but AMD/ATI has been steadily improving OpenGL performance over the last 3-4 years, and Trinity seems to bring everything together.

I don't know why or how :lol: but it's a combination of chip logic and arch, drivers, and the API itself. Implementing OpenGL in hardware and software, by its open-source nature, currently has advantages over DirectX in compute, but you can be sure that MS is fighting back with refinements to DirectCompute 11.

If you check out the OpenGL scores for the Trinity APU in Cinebench, you start to see significant evidence of this. The Turks cores in combination with the Piledriver cores just work well. This is a big deal: an integrated SIMD engine on the APU is blowing past multi-core CPUs paired with significant discrete graphics.

If you re-check the Tom's OpenCL/GL article (where they interviewed folks from Corel, Adobe, etc.), one of those guys used calls to those Turks APU cores via OpenGL. I'm not smart enough to define the intricacies of coding OpenCL/GL in this fashion, but AMD Trinity APUs seem to handle it well in hardware - and they are only halfway through their unified-memory *HSA* ultimate hardware design.

Check out Sony Vegas with the FirePro V4900. I don't know what part OpenCL/GL plays in the accelerated encoding, video FX, transitions, compositing, pan/crop, and track motion, but the impact is significant over and above the pretty dang good multi-threaded performance of the Bulldozer CPU cores.

😀 I forgot to mention that the FirePro V4900 has the same 800 MHz Turks core clock and 128-bit interface as the Turks SIMD engine on the Trinity APU. The difference is 480 vs. 384 Turks shaders - or, using desktop discrete cards as an example, the Turks HD 6670 vs. the HD 6570.

As far as BD-to-PD improvements go, it's typical of AMD in a die shrink (see: the original Phenom at 45nm). From the original stepping, they make a 15-20% leap in performance and efficiency when they refine the arch. The changes in Piledriver are almost too numerous to list: logic, split bi-directional power between the APU's x86 and GPU 'SIMD' cores, the UNB (unified northbridge), IOMMU v2, smoothing out PLL *jitter*, etc.

And that kinda ties back into the thread topic 😱 when AMD makes a big leap in efficiency, some folks can use it (and some LN2) to clock to 7.3 GHz...



edit: I messed up your quotes and just slashed 'em out - LOL - hope this makes sense



 
@Wisecracker
We thought you were referring to OpenGL performance getting noticeably boosted because of the CPU architecture (Piledriver), when, after all, you were referring to the GPU component of the APU (Trinity). That's why we were wondering. Hehe...

Though that's some nice info to know, that they're improving their OpenGL implementation in their drivers (I think that's how it works). Wanna read something about that, but on gaming? 😀 Not sure if you've read this, but it wouldn't hurt to share it: Faster Zombies!
 
And which of you bois can afford a liquid nitrogen cooling solution?

Intel and AMD are both working on their respective weak areas, and both are making great advances in APUs.

If someone really bought an A10-5800K for gaming, they would probably spend the money saved over an Intel chip on more important things, like a larger display (by some theories here, this APU is for those switching from console to PC), more RAM, or some other upgrade.

Or they may just wait a bit and see what manufacturers release next (options not requiring liquid-nitrogen cooling for high clock numbers).

I am happy with my newest AMD-powered laptop for gaming (casual or otherwise), with only a few games requiring me to lower settings more than I would like when playing at 1920x1080 on an external monitor.

Of course a discrete GPU card helps in that respect... as it will for any CPU at this time.

The point here is that if heat is the only limit on clock speed for the new AMD APUs, that is a major improvement over other causes of clock-related instability.

On the other hand, clock speed alone does not make a CPU or APU king anymore... it's instructions per clock cycle × clock ticks per second, and then there's cache thrashing and data-fetch times. To an extent, GB/s of memory bandwidth matters if the CPU/APU can process data as fast as (or faster than) it can be delivered.
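
To put made-up numbers on that:

[code]
# Throughput is IPC x clock, so the higher-clocked chip can still lose.
# Both sets of figures are invented purely to illustrate the point.
def instr_per_sec(ipc, ghz):
    return ipc * ghz * 1e9

chip_a = instr_per_sec(ipc=1.2, ghz=4.2)  # high clock, lower IPC
chip_b = instr_per_sec(ipc=1.6, ghz=3.4)  # lower clock, higher IPC
print(chip_a < chip_b)  # True: 5.04e9 vs 5.44e9 instructions per second
[/code]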

I have to agree, though: people will buy the processors being compared for different reasons, so a direct comparison as equals won't make much sense.

I encode a lot of video, and if I were limited to my CPU I would not be happy, but with GPU acceleration things change dramatically.

Cyberlink MediaEspresso is reasonable with just the CPU but blisteringly fast with GPU acceleration turned on, much like the difference in GPU-limited games. So I can appreciate how many of you say adding a discrete GPU to the A10 would make it so much better (though others are right that if you throw the same GPU at the Intel CPU, it will blow away the AMD chip listed here with ease, for a bit more out-of-pocket cash).

It really depends on what you intend to do with the A10 or i5 processors...


 