AMD Trinity Desktop Chip Schedule Challenges Mobo Makers

Status
Not open for further replies.

tomskent

Distinguished
Nov 25, 2010
177
0
18,690
"If it can reach anywhere near 80-90% performance of 2600k at half the price, I'll get one."

2600k = $300...so Trinity for $150?

Sadly, that doesn't seem very realistic.
 

iamtheking123

Distinguished
Sep 2, 2010
410
0
18,780
Bad move on AMD's part. Delaying means that they're getting closer to Haswell's release. Close enough at least to make people "wait and see". And Haswell *will* give Trinity a run for its money on graphics. Core performance and power consumption are already lost causes for AMD.
 

ammaross

Distinguished
Jan 12, 2011
269
0
18,790
[citation][nom]archange[/nom]I do believe that Heise Online should be capitalized, since it's the name of a media outlet.[/citation]
It's also German, so all Nouns should be capitalized anyway...
 

danwat1234

Distinguished
Jun 13, 2008
1,395
0
19,310
I think nowadays CPUs are becoming powerful enough for most anyone, besides someone who doesn't like to wait for videos to encode or huge programs to compile, or the gamer who demands top-notch graphics. AMD chips are at a point where they are powerful enough for most anything, and so what if Intel's CPUs are 30-50% faster? AMD won't be that far behind, and five years down the road a new AMD chip I'm sure will be much faster than Sandy Bridge.
I guess it's just a matter of how long you want to wait to get the performance level you need to make you happy for all your computing needs. I think nowadays we can forecast what sort of things we want to do on our computers in the future, and so you know what hardware will satisfy. Maybe Intel is the only one that has the performance level you want right now, so you have a choice of going that route or waiting a while for Piledriver revision 1.
 

Guest

Guest
Originally, I had planned to build a Trinity-based micro system, but with this news I'll build a Llano-based system instead. At least that way I can avoid the BIOS/driver teething issues that are sure to come with bleeding-edge technology. Once that storm clears, I'll upgrade to the Trinity setup.
 
[citation][nom]Soul_keeper[/nom]I wish these APUs would just use the same socket as their real desktop CPUs ...Seems like that'd make things a lot easier on motherboard makers, and consumers wanting an upgrade path.[/citation]

Not possible. The desktop CPU sockets don't have the necessary video output pins.

[citation][nom]bustapr[/nom]that cant happen because the APUs and the bulldozer CPU are totally different architectures. if they were to make the APU fit on an AM3+ socket, theyd have to remove the IGP completely, defeating the purpose of the APU. AM3+ is made to support bulldozer which has no IGP in the die.[/citation]

This, but architecture has little to do with it. It's really just the IGP that stops the APUs and CPUs from sharing a socket. Theoretically, AMD could design the AM4 socket with latent video output pins so that AM4 could unite the two platforms, but that can't happen until at least AM4, and that probably won't happen until DDR4 is out. Interestingly, DDR4 should also solve the memory bandwidth bottleneck problem with AMD's APUs.

[citation][nom]eddieroolz[/nom]But enthusiasts don't want to buy Trinity anyway... (@last sentence)[/citation]

Until Piledriver CPUs (Vishera) replace Bulldozer CPUs (Zambezi), Trinity could be the most powerful gaming platform for AMD CPU-wise. I bet they would overclock like mad if you used them as a CPU with the IGP disabled and a good discrete graphics card or two connected. Kinda like how Llano's CPU power usage drops like a rock if the IGP is disabled, leaving a lot of room for overclocking compared to even AMD's Phenom IIs.

The only other thing from AMD that could challenge Trinity in this sense would be an FX-8120 with one core from each module disabled, giving the entire module's resources to a single thread, even if it means that only four cores are enabled. This has been shown to improve per-core performance by up to about 20% (it's far more effective than the Windows 7 FX patches), and it would leave the FX-8120 using less power than the FX-4100, thanks to its superior binning, even though it would be significantly faster. Overclocked, an 8120 modded in such a way might be worth its current $170 price point at Newegg.

[citation][nom]iamtheking123[/nom]Bad move on AMD's part. Delaying means that they're getting closer to Haswell's release. Close enough at least to make people "wait and see". And Haswell *will* give Trinity a run for its money on graphics. Core performance and power consumption are already lost causes for AMD.[/citation]

Now this is where I see the worst problem with launching Trinity at retail so late: Haswell would be creeping up on it. However, unless Haswell at least doubles the HD 4000's IGP performance, it won't beat the top desktop A10s. Even the Llano A8s still beat the HD 4000 significantly, let alone the Trinity A10s. And it's Haswell's low-end IGP that would need double the HD 4000's performance, not just its high-end (in this context) IGP. That would be an enormous jump, because the low-end part would need to be something like 3 to 4 times faster than the HD 2500. Unlike on the mobile side, AMD is utterly killing Intel in desktop IGP performance, just as Intel utterly kills AMD in performance per core.
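A quick back-of-the-envelope check of that "3 to 4 times the HD 2500" figure. The HD 4000 being roughly 1.5-2x an HD 2500 is my own rough assumption for the sketch, not a measured number:

```python
# Rough scaling check for the IGP claim above.
# Assumption (illustrative, not measured): HD 4000 is ~1.5-2x an HD 2500.
hd4000_vs_hd2500 = (1.5, 2.0)   # assumed relative-performance range
needed_vs_hd4000 = 2.0          # the "at least double the HD 4000" target

# How far above the HD 2500 the low-end Haswell IGP would then need to land:
needed_vs_hd2500 = tuple(needed_vs_hd4000 * r for r in hd4000_vs_hd2500)
print(needed_vs_hd2500)  # (3.0, 4.0) -> the "3 to 4 times" figure
```

So the "3 to 4 times" range follows directly from doubling the assumed HD 4000 lead.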
 

tomfreak

Distinguished
May 18, 2011
1,334
0
19,280
[citation][nom]styrkes[/nom]"get rid of its pile of Llano"but who wants them? I want Trinity.[/citation]By selling them at another 50% discount on top of the current price?

If so, I might get one too.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
I was looking at that long comment above addressed to multiple comments, and after going through it, no surprise, blaz! Haha! I don't really expect anything less of you and you're turning me into a fanboy of yours. Don't expect me to be biased with my comments though! Hehehe...

Getting back to something more relevant, I found your proposition (is it your own?) of disabling one module on an FX-8120 interesting. Is it a 20% performance increase per core or per module? Would this yield more performance overall compared to a fully active 8120? The thing is, I'm not sure if you meant that it would just be more power efficient. If it's the former, which you seemed to imply, I'm wondering how deactivating an ALU (integer unit, are they pretty much the same? Please tell me if so) would benefit performance, unless it's somewhat like how Hyper-Threading can reduce performance in some situations. Please enlighten me/us. :)
 
[citation][nom]army_ant7[/nom]I was looking at that long comment above addressed to multiple comments, and after going through it, no surprise, blaz! Haha! I don't really expect anything less of you and you're turning me into a fanboy of yours. Don't expect me to be biased with my comments though! Hehehe...Getting back to something more relevant, I found your proposition (is it your own?) of disabling one module on an FX-8120. Is it a 20% performance increase per core or module? Would this yield more performance all in all compared to a fully active 8120? The thing is, I'm not sure if you meant that it would just be more power efficient. If it's the former which you seemed to imply, I'm wondering how deactivating an ALU (integer unit, are they pretty much the same, please tell me if so) would benefit performance, unless it somewhat like how Hyper-threading could reduce performance in some situations. Please enlighten me/us. :)[/citation]

Disabling one integer core/ALU in each Bulldozer module improves the performance of the remaining core by up to about 20%. This is because it gives that one core all of an entire module's resources and allows more aggressive Turbo frequencies to be used. This lowers highly-threaded performance because you basically have half the cores; each core is just about 20% faster than it used to be. Highly-threaded performance, at best, would be less than 5/8 of what it is when all 8 cores are active, but single-, dual-, triple-, and quad-threaded performance would improve, and for gaming and most other consumer workloads, that is often more important than performance with 8 or even 6 threads.
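The 5/8 ceiling is easy to sanity-check with some quick arithmetic, taking the ~20% per-core gain as an upper bound (the numbers are illustrative):

```python
# Back-of-the-envelope throughput comparison for an FX-8120 with one
# core per module disabled. The ~20% per-core gain is the upper-bound
# figure discussed above, so the ratio below is a best case.

full_cores = 8
modded_cores = 4          # one core left per module
per_core_gain = 0.20      # up to ~20% faster per remaining core

full_throughput = full_cores * 1.0
modded_throughput = modded_cores * (1.0 + per_core_gain)  # 4.8 "core-units"

ratio = modded_throughput / full_throughput
print(f"highly-threaded throughput ratio: {ratio:.3f}")  # 0.600
print(ratio < 5 / 8)  # True -> under the 5/8 (0.625) ceiling
```

Even at the full 20% gain, four cores only deliver 60% of the original eight-core throughput, which is why the single- to quad-thread gains have to carry the trade-off.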

So, the module's performance drops, but the performance per core increases. Basically, you're taking a module that has four x86 decode paths, two 128-bit FPU threads or one 256-bit FPU thread (interchangeable), 2MB of L2 cache, and more, and instead of sharing all of this between two cores, a single core now has all of it. In use, it's kinda similar to how HTT can hurt performance in workloads that aren't designed for it, but the actual mechanisms behind the behavior are a little different.

The power usage drop would come from the fact that you now have half the cores, so the CPU isn't using as much power. I'd say that this drops power usage to around that of the FX-4100 while letting the specialized 8120 be considerably faster than the 4100. The 8120 is also binned better, so it should overclock better than the 4100.

Now, this next thing might not have a huge impact, but you also have the 8120's active cores basically strewn about the CPU instead of focused in two modules. So it might not only have lower heat density at the same power usage as the 4100 (spreading about the same amount of heat over four modules' worth of die space where the 4100 uses only two modules), and thus be more stable on top of the better binning, but also heat up more slowly because the heat is spread over a larger area. I don't think it will make a huge difference, but theoretically, it should at least help a little.

Heck, if possible, you could probably even disable some of the L3 cache without taking a performance hit, given the huge amount of cache each core now has, further decreasing power usage slightly, improving power efficiency slightly, and maybe allowing slightly higher overclocks.

FX-81xx has a lot of apparent advantages that I'd like to explore in further detail.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
[citation][nom]blazorthon[/nom]Highly threaded performance, at best, would be less than 5/8 of what it is when all 8 cores are active, but single, dual, triple, and quad threaded performance would improve and for gaming and most other consumer workloads, that is often more important than performance with 8 or even 6 threads.[/citation]

I didn't think of this even though game benchmarks have shown it to be true time and time again. This idea is very interesting. If only AMD or a mobo manufacturer would allow this kind of control/setting. Then I'd like to see it put to the test. Are these propositions of yours theoretical for now, or have they been proven? :)

[citation][nom]army_ant7[/nom]Getting back to something more relevant, I found your proposition (is it your own?) of disabling one module on an FX-8120[/citation]

I just want to correct myself: I meant "core", not "module". Got them mixed up for a moment there. Hehehe...
 
[citation][nom]army_ant7[/nom]I didn't think of this even though game benchmarks have shown this to be true time and time again. This idea is very interesting. If only AMD or a mo-bo manufacturer would allow these kind of controls/settings. Then I'd like it put to the test. Are these propositions of yours theoretical for now or have they been proven? :)I just want to correct myself, I meant "core" not "module" got them mixed up for a moment there.[/citation]

This performance boost is a proven phenomenon. I'll look through my links and see if I can find some of the demonstrations of this (I've read two reviews that demonstrated this). Also, some motherboards allow it. I don't remember any particular board models that do, but I know that some Asus boards can do it.
 
