AMD Piledriver rumours ... and expert conjecture

Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question or information relevant to the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Somewhat. If Intel is supporting the developer and staffing them with a few software engineers, do you think they will allow that developer to compile with that option?

http://software.intel.com/sites/billboard/article/blizzard-entertainment-re-imagines-starcraft-intels-help

AMD is not in that "largest group of systems on the market today"
Looks like Intel (like Nvidia) gets deep into game development with other companies. AMD doesn't usually do that.
In this specific case, I noticed that StarCraft uses Havok, which is owned by Intel. It's kind of a no-brainer that Intel would push software optimization for their own hardware. Say Intel puts the compiler fix into Havok but leaves it turned off: I don't think they're really breaking the law or explicitly sabotaging AMD. Per the court order, Intel put the compiler fix in the software (turned off by default) and left it as is, since Havok belongs to Intel.
I've noticed that the argument almost always turns to games, but there is other software people use.
 
Stalker may be optimized for AMD, but was it crippled for Intel systems in the same way SC2 was crippled for AMD?

http://downloadsquad.switched.com/2010/01/04/intel-forced-to-provide-a-compiler-that-isnt-crippled-for-amd-processors/

What software vendor in any industry would cripple their software for 80% of the market? When you make the program run equally on both sets of hardware, guess what? Intel is only ~10% faster.

But no one wants to see that; they want to see the massive differences that SC2 shows. Ultimately that's Intel's agenda: ignore all the programs that run similarly, and only look at software that Intel pushed, either with their compiler or with the Havok engine.
 
and yet here we have a review site showing just the opposite

[benchmark chart: 41709.png]


 
LOL, really:

http://www.bit-tech.net/hardware/cpus/2011/10/12/amd-fx-8150-review/10
bit tech ... rofl.

Note: the AMD chips were tested in an ATX motherboard, while the Intel LGA1155 chips were tested in a micro-ATX board. This difference can account for up to 20W.

and their overclock ... rofl ... let's see how far we can push power draw: CRANK THE VOLTAGES, ALL OF THEM. Let's burn this thing up.
 
Then why is it that AMD CPUs are deemed sooo horrible, if that's the case?
They aren't, in this case. I've said it before, once the cpu becomes the bottleneck, the game is perfectly playable anyway. Games are not a good place to have an argument about cpu power.
Don't think I'm gonna wait for Piledriver...
Gonna look at the line of Ivys and check out the i5-3550, the Ivy version of the i5-2400, to replace my AMD unit..

the break-up with AMD continues.. 🙁
You might want to wait to see some Trinity benchmarks, at least the notebook ones. I doubt it will be extremely spectacular, but it may bode well for PD. Unless you're just not up for the wait, which will still be quite a while.
 
Well, I thought that was a past thing and was actually resolved.... :??:

I guess certain people hold on to the past and cannot progress, because their mentality will not let them :/

Choose your words carefully; certain people will not tolerate anyone trying to turn off the evil Intel propaganda machine.
 
I think you guys are not understanding each other's points and are arguing your own points over and over again, lol.

Intel side: AMD does poorly -> benchmarks to prove it -> fact stated and accepted (at least by me, lol).
AMD side: Intel does not give devs/OEMs room to make AMD look better, and cripples them in very dubious ways that distort fair-and-square competition -> shows FTC evidence and explains why the benchmarks put AMD behind by a wide margin -> not accepted (or accepted, but the other argument keeps getting pushed).

That makes this thread go around in circles till a new tidbit of information about Trinity or PD comes around and we start over again, lol.

Anyway, I still consider BD to be a very expensive sidegrade for me, but I still recognize the boldness of the new design. I still think AMD screwed up by trusting Intel on FMA4 and by not PUSHING early development of software that actually supported BD out of the door. Hell, not even their own bloody compiler fully supported BD at launch (not optimized as it should have been). That's just dumb.

Cheers!
 
Don't think I'm gonna wait for Piledriver...
Gonna look at the line of Ivys and check out the i5-3550, the Ivy version of the i5-2400, to replace my AMD unit..

the break-up with AMD continues.. 🙁

LOL - pretty much as predicted by me and Earl45 in the Haswell thread:

😗
 
Again, you fail to differentiate between the optimization and code gen stages of compilation. I read the settlement as meaning a failure to optimize.

Really simple to test, though: take a piece of code, manually insert an SSE instruction, and compile with all optimizations disabled. Then compile with optimizations enabled.

If what you argue is true, then with no optimizations, the only code path would be the SSE instruction, and the app would instantly crash within the CPU dispatcher.


You didn't do your homework since last time.

Agner already did exactly that; it doesn't work. I can tell you're a HLL guy.

The Intel compiler compiles any code multiple times, then inserts its dispatcher at the front of that code. When you execute that code, the dispatcher checks your CPUID / Vendor ID and uses that to determine which code path to take; this is done at run time. No matter what you do to your code, the compiler will always generate multiple paths, and one of those paths will include i386 / x86 instructions only (no SSE / MMX / FMA / AVX). You can manually insert a SIMD instruction using ASM, and the compiler will just replace it with an i386 equivalent on the generic i386 path. Thus even if the software developer manually inserted ASM into every aspect of their code, the Intel compiler will just override that at compile time. You will get two sets of code: one with your own optimizations and one with the generic i386 optimizations.

Thus Intel's compiler was determined by the FTC to be a defective product, inserting unwanted artificial performance impairments into the products of its customers. AMD and Intel were told to settle their disagreements or the civilian courts would do it for them; thus the CLA was made, and Intel agreed not to introduce artificial performance impairments into its compiler for AMD CPUs. The FTC then separately told Intel they couldn't do that to ~anyone~ and to stop using unethical business practices (referring to the OEM deals); if Intel didn't fix the issue, the FTC would take it to the Justice Department.

This was all with the older compiler, prior to 2010; the one made after 2010 instead uses SSE2 in its generic instruction path. Meaning if you hand-coded SSE4.2 ASM and compiled it, the Intel compiler would generate two instruction sets: one with SSE2 only (recoding your SSE4.2 optimizations) and one with the SSE4.2-capable path. You would have to patch the binary afterwards to remove the Vendor ID check from the dispatcher. Or you can force the compiler to generate only one code path and remove the dispatcher completely, but this introduces the issue that you've limited the compatibility of the code, similar to what GCC and MSVC have to deal with.

*Note*
The Intel compiler actually generates more than one code path; I restricted the statement to the two that matter the most, the generic i386 one and the optimized one.

Any SIMD operation can be done by a sequence of integer operations instead. The SIMD instruction is vastly more efficient, as it allows those operations to be done in a few cycles vs. 40~50+ cycles. Thus it is easy for a compiler to do code substitution; in fact, every modern compiler does this anyway in an attempt to optimize your data and instruction segments.
 
AMD took legal action against the compiler in 2005; the court ordered Intel to fix it in 2010. Intel fixed it with this option:

/QxO

Enables SSE3, SSE2 and SSE instruction sets optimizations for non-Intel CPUs

So in order to enable it for AMD CPUs, you have to know to use that option. By default it's still disabled.

Not exactly. That feature isn't used often because it has one severe side effect: it defines the lowest level of instruction compiled. Meaning if you set it to SSE3, the code will NEVER run on anything SSE2 or lower. It's not enabling those instructions; it's just defining what level the generic path is compiled to. The software industry is ruled by the lowest common denominator: vendors want software that will run on the widest possible selection of CPUs, so using that switch immediately limits the CPUs you can run on.
 
Don't think I'm gonna wait for Piledriver...
Gonna look at the line of Ivys and check out the i5-3550, the Ivy version of the i5-2400, to replace my AMD unit..

the break-up with AMD continues.. 🙁

As an enthusiast, AMD really does not deserve your business or mine. I have the 2600K. It is my first Intel CPU.
 
Hurting much Yuka? Here is some ibupro...

But make it a generic brand at least, I'd like to avoid the ibupro in the blue package, hahaha.

I'm not that hurt though, since the MoBo has been excellent in so many ways for its price and I can still hope for PD to be an upgrade. Even if it is small, I'll take it, lol. It will be cheaper than building a whole new Intel rig (mal and I already put up some numbers for that, hehe).

Not exactly. That feature isn't used often because it has one severe side effect: it defines the lowest level of instruction compiled. Meaning if you set it to SSE3, the code will NEVER run on anything SSE2 or lower. It's not enabling those instructions; it's just defining what level the generic path is compiled to. The software industry is ruled by the lowest common denominator: vendors want software that will run on the widest possible selection of CPUs, so using that switch immediately limits the CPUs you can run on.

Wait, but weren't compilers supposed to "patch" the code for that? I mean, if your CPU doesn't have it, then use the other ASM combo for it, right? Or are you still talking about the dispatcher model? Or did I just get horribly confused? hahaha.

Cheers! 😛
 