AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
I will say one interesting thing though: in games that are Intel-optimized, Intel has a major advantage, but in games that are not Intel-optimized but rather AMD-optimized, Intel still keeps up with AMD. Strange, really.

Ok we need to stop this. There is no such thing as "Intel optimized" or "AMD optimized" unless we're talking FMA / XOP, which we're not. The only difference between them is the compiler and subsequent libraries used; otherwise it's the same instructions. You can tweak certain functions to get a few cycles here and there, but that will typically result in less than 2% total performance at the end of the day. If a company is using the Intel compiler, the produced code will not use the extended instruction sets on the AMD CPU, instruction sets which are identical to what the code will run on the Intel CPU. If you could change the CPUID on the AMD CPU you would suddenly get higher performance. If a company is using the GCC or MS compiler, it will do a proper CPU detect and choose the code path that matches the CPU's capability.

Thus you either produce code that works well on Intel and sh!t on everything else, or you produce code that works well on everything.
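To illustrate that "proper CPU detect" (a minimal sketch, assuming GCC or Clang for __builtin_cpu_supports; the path names are stand-ins, not any real compiler's internals):

#include <cstdio>

// Pick a code path from the CPU's feature flags, never from its vendor ID.
static const char* pick_path() {
    if (__builtin_cpu_supports("avx"))  return "AVX path";
    if (__builtin_cpu_supports("sse2")) return "SSE2 path";
    return "generic x86 path";
}

int main() {
    std::printf("dispatch chose: %s\n", pick_path());
    return 0;
}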

But what you just stated is what people are saying, and it is optimizing. Optimizing is taking advantage of the hardware's specific instruction sets or abilities, much like PhysX, SSE, FMA, CUDA, Quick Sync, etc.

That's what they are stating. I honestly was pointing out that I don't think it will make a major difference, which is why I find it strange that games backed by AMD don't outperform Intel CPUs by much, if at all. It's just an observation, but to say there is no such thing as optimizations for specific CPUs, that's just a false statement.
 
So, you think it's a joke? There were a few news reports some months ago about an 8-core Phenom II, but those reports said it was a reduced-frequency Bulldozer line of CPUs.

That would make no sense. It would be taking a brand that is known for one thing and making it something less. Wait, they did that already. The FX line was known to be top of the line...

Still, IF it was true:

http://www.planet3dnow.de/photoplog/file.php?n=19534&w=o

That is not a BD die. It looks more like an 8-core CPU, not a module approach. Then again it's hard to say, but compare it to a BD die:

http://hothardware.com/articleimages/Item1742/bulldozer-die.jpg

It just doesn't look the same.

Add the fact that it's supposed to be 494mm^2 with the same L2 as well as L3, and it has to be full-blown cores, not modules.

Still, I doubt it since there has been nothing on the roadmaps to back this up, nor anything official from AMD. And why would they use the 45nm process when GF has probably moved over to 32nm?
 
But what you just stated is what people are saying, and it is optimizing.

...


Umm no. No such "optimizations" are happening.

It's the same code, unless you want to say coders are "optimizing" for my Via Nano and they just made a mistake.

The only difference is that the dispatcher from the Intel compiler is hardwired to ignore CPU feature flags on any CPU with AuthenticAMD or VIA as its CPU vendor ID. If the vendor ID is set to Intel, it then queries the family ID and compares it to a hard-coded list of Intel CPU feature sets. If the CPU isn't in the list, it queries the flags and determines from there what code path to use. It will never query the feature flags of AMD or VIA CPUs. If you set your vendor ID to Intel, the dispatcher will then utilize SSE and the other extended x86 instructions. Neither GCC nor MSC does this; they query the feature flags and determine the code path from there, with absolutely no vendor preference shown.

A better way to describe it is that the Intel compiler doesn't optimize for Intel, it just refuses to run its already-optimized code on AMD or VIA. You can even patch your code in such a way as to disable the Intel dispatcher and force it to query feature flags all the time, which completely disables the "gimp AMD/VIA" code. If you actually read what Agner wrote about this, you wouldn't be making these statements and trying to defend Intel's behavior. He's done some rather in-depth testing and research: he compiled his own code, ran it on his own VIA CPU with different vendor strings, and analyzed the performance for benchmarking. It's rather telling how bad Intel is here. The exact same code changes its behavior depending on what CPU it's run on. It's not that the code isn't optimized, it's that it refuses to run those optimizations.
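For reference, here's roughly what such a dispatcher has to work with (a minimal sketch, assuming GCC's <cpuid.h> on an x86 target). CPUID leaf 0 returns the vendor string; leaf 1 returns the architectural feature flags a fair dispatcher should key on:

#include <cpuid.h>
#include <cstdio>
#include <cstring>

int main() {
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};

    // Leaf 0: the 12-byte vendor string ("GenuineIntel", "AuthenticAMD",
    // "CentaurHauls" for VIA) is packed into EBX, EDX, ECX, in that order.
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    std::memcpy(vendor + 0, &ebx, 4);
    std::memcpy(vendor + 4, &edx, 4);
    std::memcpy(vendor + 8, &ecx, 4);

    // Leaf 1: the feature flags a vendor-neutral dispatcher would use.
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    std::printf("vendor: %s  SSE2: %u  SSE3: %u\n",
                vendor,
                (edx >> 26) & 1,  // CPUID.1:EDX bit 26 = SSE2
                ecx & 1);         // CPUID.1:ECX bit 0  = SSE3
    return 0;
}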
 
Umm no. No such "optimizations" are happening.

...

As I have said before, Intel is not responsible for making sure those CPUs are running at their peak; the CPU vendors/software devs are.

You cannot expect Intel to write a compiler for everyone else's hardware; it's just not competitive. I don't expect AMD to do it either. If AMD wrote a compiler and it only utilized what was available in an AMD CPU, then that's what I would expect.

And still, it's in a technical sense optimizing. While Intel's compiler is not even used that much at all, utilizing feature sets available in a CPU is in a way optimizing for said hardware. If it was set to use FMA or Quick Sync, it's optimizing the program to utilize said hardware.
 
As I have said before, Intel is not responsible for making sure those CPUs are running at their peak; the CPU vendors/software devs are.

...

Wow ... mental gymnastics. You'd win the gold in that event.

Your talents are being wasted at a computer store; you need to apply for a position in a legal department.

And again, it's not about Intel optimizing, it's about them deliberately disabling extensions on their competitors' CPUs. This isn't for you to argue: Intel admitted to it in court and has signed an agreement not to do it in the future. It took them over two years to remove the block on SSE2 instructions for non-Intel CPUs from their code, and they were forced at legal gunpoint to do it. Even now their most recent compiler will only allow SSE2 instructions on other CPUs while using SSE3, SSE4 and AVX on Intel-branded CPUs.

And all those games that Intel provides SDKs and support for? Guess what: most of them are using the Intel compiler for their programs. If you want to see this, just run those programs on a Via Nano platform. Benchmark them, then change the CPUID to Intel and benchmark them again. *BAM* instant increase in performance; all you had to do was trick the program into thinking it was running on an Intel CPU.

And if you can't see that as wrong and unethical, the deliberate disabling of features without user consent (and previously without customer consent), then you've become as bad as the iSheeple.
 
That would make no sense. It would be taking a brand that is known for one thing and making it something less. Wait, they did that already. The FX line was known to be top of the line...

...

Still, I doubt it since there has been nothing on the roadmaps to back this up, nor anything official from AMD. And why would they use the 45nm process when GF has probably moved over to 32nm?

Well, I find the possibility intriguing. Why does a new and better CPU need to be done on a smaller process? In a sense, and this is perhaps unique to my experiences, the drive to smaller processes for CPUs is like the drive to use fusion for power generation. I know that the real answer isn't nuclear energy, it's electromagnetics (free energy, zero-point energy). I can't help but wonder if the next real step up in CPU design has nothing to do with a smaller process but rather a different way of doing things.
 
Wow ... mental gymnastics. You'd win the gold in that event.

...

You seem to always get bent out of shape and somehow slip in veiled insults. I really don't appreciate it. I also disagree, but hey, welcome to the internet (and, if you're in America, a place where we can exchange ideas and disagree).

I think all Intel did was set it to see what CPU it was and enable the switches. Sure, they stopped; that's great. But as for wrong? I don't see how. And as said before, the majority uses the MS compiler, not Intel's or anyone else's.

Well, I find the possibility intriguing. Why does a new and better CPU need to be done on a smaller process? ...

I don't see anything wrong with it per se, but I don't see the logic in it. They have a smaller process node which would shrink the die, allowing for higher yields. A die that large would have very bad yields to start with, considering it would also yield fewer dies per wafer. A newer process node normally also brings lower power use and lower thermals.

So to me it would make no sense to use an older process.

And as I said, if that's the legit die shot, it's not BD-based; it would be Deneb/Thuban-based:

[attached die shot]

That "8 core Phenom II" looks almost like two Denebs "stitched" together.
 
You seem to always get bent out of shape and somehow slip in veiled insults. ...

Except that's not what happened. Not at all.

They saw it wasn't an Intel CPU and disabled them in the chosen code path. There are no "switches" required to enable SIMD instructions; they're always enabled. You simply choose to use code that has them or use code that doesn't. Intel's compiler put SIMD instructions down the Intel CPUs' code path and prevented SIMD from being used on non-Intel CPUs.

This isn't a case of "enabling" these mysterious Intel features, it's a case of disabling them in software. Don't you get it: Intel wasn't enhancing its own CPUs' performance, they were hamstringing and disabling the performance of their competitors' CPUs. You make it sound like they were doing you a favor or something good when they were doing something that is very VERY bad. They didn't stop it either, they only do it to a lesser extent now, and they had to be forced by a court to change. This wasn't a voluntary decision on their part.
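To put the "no switches" point in code (a minimal sketch using SSE2 intrinsics, x86 only): the instruction is either in the emitted code path or it isn't, and on a CPU without SSE2 it would die with an illegal-instruction fault rather than be "switched off":

#include <emmintrin.h>  // SSE2 intrinsics
#include <cstdio>

int main() {
    alignas(16) int a[4] = {1, 2, 3, 4};
    alignas(16) int b[4] = {10, 20, 30, 40};

    __m128i va = _mm_load_si128(reinterpret_cast<const __m128i*>(a));
    __m128i vb = _mm_load_si128(reinterpret_cast<const __m128i*>(b));
    __m128i vc = _mm_add_epi32(va, vb);  // four adds in one SSE2 instruction
    _mm_store_si128(reinterpret_cast<__m128i*>(a), vc);

    std::printf("%d %d %d %d\n", a[0], a[1], a[2], a[3]);  // 11 22 33 44
    return 0;
}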
 
This isn't for you to argue, Intel admitted to it in court and has signed an agreement not to do it in the future.
What court was that, exactly?

As for enabling or disabling code generation, how is Intel supposed to ensure that other manufacturers actually implemented the instructions their feature flags say they implemented? Are you aware that AMD once put out a mobile chip which reported it had SSE (I believe it was SSE) but did not actually implement that instruction set? Now what would happen to code compiled based on those flags and run on that chip? Kaboom!

Let's agree on one thing - these days compilers should just use the feature flags. I have no idea why they don't. But it may not be for the reasons you presume.
 
What court was that, exactly?

...


GCC and MSC do use feature flags. And if you attempt to execute code that isn't supported, it just won't run, period. Of course, neither GCC nor MSC produced code that didn't run. And as far as I know there never was an AMD CPU reporting SSE when it didn't have it.

That link I gave earlier to Agner's site lists the lawsuit and the results from it. I have provided all the information required for anyone to do their own research and reach their own conclusions. And while no one outside Intel is privy to the exact motive behind the decision to disable extensions on non-Intel CPUs, I find it rather hard to believe that the best engineers in the world would make such a poor engineering decision. Intel has a past history of attempting to sabotage its competitors; this history goes back to the late '80s and early '90s. Companies make multimillion-USD decisions based on performance data from hardware platforms. For any company to rig that data so as to give themselves an advantage is very bad for business. This is why SPEC and other industrial benchmarking suites disclose the compiler and settings used.
 
http://www.agner.org/optimize/blog/read.php?i=49

2.3 TECHNICAL PRACTICES

Intel shall not include any Artificial Performance Impairment in any Intel product or require any Third Party to include an Artificial Performance Impairment in the Third Party’s product. As used in this Section 2.3, “Artificial Performance Impairment” means an affirmative engineering or design action by Intel (but not a failure to act) that (i) degrades the performance or operation of a Specified AMD product, (ii) is not a consequence of an Intel Product Benefit and (iii) is made intentionally to degrade the performance or operation of a Specified AMD Product. For purposes of this Section 2.3, “Product Benefit” shall mean any benefit, advantage, or improvement in terms of performance, operation, price, cost, manufacturability, reliability, compatibility, or ability to operate or enhance the operation of another product.

In no circumstances shall this Section 2.3 impose or be construed to impose any obligation on Intel to (i) take any act that would provide a Product Benefit to any AMD or other non-Intel product, either when such AMD or non-Intel product is used alone or in combination with any other product, (ii) optimize any products for Specified AMD Products, or (iii) provide any technical information, documents, or know how to AMD.

The FTC's antitrust investigation against Intel

Intel sought to undercut the performance advantage of non-Intel x86 CPUs relative to Intel x86 CPUs when it redesigned and distributed software products, such as compilers and libraries.
[...]
To the public, OEMs, ISVs, and benchmarking organizations, the slower performance of non-Intel CPUs on Intel-compiled software applications appeared to be caused by the non-Intel CPUs rather than the Intel software. Intel failed to disclose the effects of the changes it made to its software in or about 2003 and later to its customers or the public. Intel also disseminated false or misleading documentation about its compiler and libraries. Intel represented to ISVs, OEMs, benchmarking organizations, and the public that programs inherently performed better on Intel CPUs than on competing CPUs. In truth and in fact, many differences were due largely or entirely to the Intel software. Intel’s misleading or false statements and omissions about the performance of its software were material to ISVs, OEMs, benchmarking organizations, and the public in their purchase or use of CPUs. Therefore, Intel’s representations that programs inherently performed better on Intel CPUs than on competing CPUs were, and are, false or misleading. Intel’s failure to disclose that the differences were due largely to the Intel software, in light of the representations made, was, and is, a deceptive practice. Moreover, those misrepresentations and omissions were likely to harm the reputation of other x86 CPUs companies, and harmed competition.
[...]
Some ISVs requested information from Intel concerning the apparent variation in performance of identical software run on Intel and non-Intel CPUs. In response to such requests, on numerous occasions, Intel misrepresented, expressly or by implication, the source of the problem and whether it could be solved.
[...]
Intel’s software design changes slowed the performance of non-Intel x86 CPUs and had no sufficiently justifiable technological benefit. Intel’s deceptive conduct deprived consumers of an informed choice between Intel chips and rival chips, and between Intel software and rival software, and raised rivals’ costs of competing in the relevant CPU markets. The loss of performance caused by the Intel compiler and libraries also directly harmed consumers that used non-Intel x86 CPUs.

FTC's Remedy

Requiring that, with respect to those Intel customers that purchased from Intel a software compiler that had or has the design or effect of impairing the actual or apparent performance of microprocessors not manufactured by Intel ("Defective Compiler"), as described in the Complaint:

Intel provide them, at no additional charge, a substitute compiler that is not a Defective Compiler;
Intel compensate them for the cost of recompiling the software they had compiled on the Defective Compiler and of substituting, and distributing to their own customers, the recompiled software for software compiled on a Defective Compiler; and
Intel give public notice and warning, in a manner likely to be communicated to persons that have purchased software compiled on Defective Compilers purchased from Intel, of the possible need to replace that software.


Lots more detail in Agner's post.

Please tell me how good Intel is and how right they are about this practice.
 
Agner's analysis of Intel's compiler and libraries.


Preliminary test results for Matlab

Fast Fourier Transform (Lower is better)
VIA: 0.4415
AMD: 0.4502
INTEL: 0.2729

Built-in benchmark test on Matlab v. 7.11, 32 bit, Windows 7, VIA Nano L3050, 1.8 GHz. Average of 10 measurements.

This is just an example; he did many more tests and provides files and information on how to run your own tests.
 
GCC and MSC do use feature flags. ...

More interesting is this:

http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/cpp/lin/main/main_welcome.htm
Intel® Compiler includes compiler options that optimize for instruction sets that are available in both Intel® and non-Intel microprocessors (for example SIMD instruction sets), but do not optimize equally for non-Intel microprocessors. In addition, certain compiler options for Intel® Compiler are reserved for Intel microprocessors. For a detailed description of these compiler options, including the instruction sets they implicate, please refer to "Intel® Compiler User and Reference Guides > Compiler Options". Many library routines that are part of Intel® Compiler are more highly optimized for Intel microprocessors than for other microprocessors. While the compilers and libraries in Intel® Compiler offer optimizations for both Intel and Intel-compatible microprocessors, depending on the options you select, your code and other factors, you likely will get extra performance on Intel microprocessors.

While the paragraph above describes the basic optimization approach for Intel® Compiler, with respect to Intel's compilers and associated libraries as a whole, Intel® Compiler may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include Intel® Streaming SIMD Extensions 2 (Intel® SSE2), Intel® Streaming SIMD Extensions 3 (Intel® SSE3), and Supplemental Streaming SIMD Extensions 3 (Intel® SSSE3) instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors.

Intel recommends that you evaluate other compilers to determine which best meet your requirements.

As sonoran said, no one can guarantee what SIMDs are available. Intel even states it's best to evaluate other compilers.
 
More interesting is this:

http://software.intel.com/sites/products/documentation/hpc/composerxe/en-us/cpp/lin/main/main_welcome.htm

...

As sonoran said, no one can guarantee what SIMDs are available. Intel even states it's best to evaluate other compilers.

Ahh, I was waiting for you to do this. It proves you didn't do your homework; you didn't even read the above posts.

What you're reading is Intel's updated disclaimer.

From the FTC,

Requiring that, with respect to those Intel customers that purchased from Intel a software compiler that had or has the design or effect of impairing the actual or apparent performance of microprocessors not manufactured by Intel ("Defective Compiler"), as described in the Complaint:

...


Prior to the investigation/settlement, Intel didn't have that disclaimer and even lied to customers about the source of the performance discrepancy.

From the FTC's investigation:

Intel sought to undercut the performance advantage of non-Intel x86 CPUs relative to Intel x86 CPUs when it redesigned and distributed software products, such as compilers and libraries.

[...]

Their investigation findings

http://www.ftc.gov/os/adjpro/d9341/091216intelcmpt.pdf

Agner even talked to Intel engineers in January 2007 about the performance discrepancy, and they lied to him.

Please continue defending Intel in this regard. It only shows how blind and prejudiced you've made yourself. All the information is there in black and white; there were several complaints, not only from AMD but also from ISVs. The FTC did an investigation, found Intel doing bad things, and issued remedies that Intel must follow to correct the behavior; this is separate from the AMD-Intel legal settlement.
 
Agner's post from 2005 referencing his 2004 discoveries.

http://yro.slashdot.org/comments.pl?sid=155593&cid=13042922

by Eponymous Cowboy (706996) * on Tuesday July 12 2005, @11:43AM (#13042922)
I noticed this problem back in January of 2004, with Intel C++ 8.0, and went through heck over nine months with Intel's customer support to get it fixed until I eventually had to abandon their compiler.

On any non-Intel processor, it specifically included an alternate code path for "memcpy" that actually used "rep movsb" to copy one byte at a time, instead of (for example) "rep movsd" to copy a doubleword at a time (or MMX instructions to copy quadwords). This was probably the most brain-dead memcpy I'd ever seen, and was around 4X slower than even a typical naive assembly memcpy:

push ecx        ; save the byte count
shr ecx, 2      ; ecx = count / 4 (number of dwords)
rep movsd       ; copy a doubleword at a time
pop ecx         ; restore the byte count
and ecx, 3      ; 0-3 bytes left over
rep movsb       ; copy the remaining bytes one at a time

They responded with completely ridiculous answers, such as:

"Our 8.0 memcpy was indeed optimized for a Pentium(r)4 Processor,when we reworked this routine we used the simplest, most robust, and straightforward implementation for older processors so that we didn't need the extra code to check for alignment, length, overlap, and other conditions."

BS. I went and added the following line to the beginning of my source code:

extern "C" int __intel_cpu_indicator;

then I added:

__intel_cpu_indicator = -512;

to the "main" function.

This forced Intel C++ to use the "Pentium 4" memcpy regardless of which processor is in the machine. It turns out that their special "Pentium 4" memcpy, which I tested thoroughly in all kinds of situations, worked perfectly fine on an AMD Athlon and a Pentium III. I pointed this out to them.

I received the following response:

"The fast mempcy is over 2000 lines of hand coded assembly, with lots of special cases where different code paths are chosen based on relative alignment of the source and destination. ... If the performance of memcpy/memset only are improved for Pentium III will that satisfy you?"

I answered "No," saying that I needed support for AMD processors as well. I also gave them a copy of my own memcpy routine that was 50% faster than theirs--and just used MMX. They closed the support issue and did nothing to resolve it.

I switched back to Visual C++.
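Pulling the workaround described above into one place (a sketch of the ICC 8.0-era hack from the post, not a supported API; it only links when built with that compiler, and the symbol name and the value -512 come straight from the post):

#include <cstring>

extern "C" int __intel_cpu_indicator;  // ICC's internal dispatch indicator

int main() {
    __intel_cpu_indicator = -512;       // claim the "Pentium 4" code path,
                                        // whatever CPU we're actually on
    char src[16] = "hello", dst[16];
    std::memcpy(dst, src, sizeof src);  // now routed to the P4 memcpy
    return 0;
}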

It is one thing to not help a competitor, but to go out of your way to set them back is unethical. In this case, it appears that Intel programmed its compiler to give a non-Intel chip memcpy instructions that are far below the minimum standard of any chip capable of running current x86 code.

The original code path can run on any x86 chip! Why was an alternate code path even created? Now, if the check said "if thiscpu_mmx_enabled == true, then use fast mmx path, else use slow non-mmx path" then it'd be a legitimate optimization with built-in compatibility.

But the compiler doesn't care whether a chip is capable or not, it just cares whether it's Intel or not.

Given that Intel and AMD are basically the only major players in the x86 CPU market, there seems to be no significant difference between detecting Intel vs non-Intel and detecting Intel vs AMD.

But a MUCH more basic question is why Intel's compiler designers should give a soaring adlunar coition what chip the compiler is running on. If the compiler runs well on a real Intel chip, the job's done. THERE IS NO NEED to detect non-Intel chips for ANY legitimate reason. If the third-party chip is *really* Intel-c

They've been doing this a very long time and eventually got caught.
 
Agner's post from 2005 referencing his 2004 discoveries.

...

They've been doing this a very long time and eventually got caught.

Wow, lol. You guys have been challenging palladin and he just outclasses you. He's not the average guy you can just talk down to, is he?

Anyway, I sure have learned a few things. I don't mind giving my opinion that Intel can learn to be more ethical, if they haven't yet. Meh, they haven't yet.

It's really refreshing to see someone exhibit some real knowledge, having seen so many argumentative techniques that win the argument (in their minds) without doing much to reveal the truth.
 
That would make no sense. It would be taking a brand that is known for one thing and making it something less. Wait, they did that already. The FX line was known to be top of the line...

...
I think someone got April fooled 😱

This one is pretty good too: http://www.chipchick.com/2012/04/sony-vaio-q.html
 
The problem isn't necessarily the CPU, but the coding. Either Intel is pushing for such piss-poor code that it will only run on single-core, super-high-IPC computers (i.e., theirs), or they are changing the parameters when AMD CPUs are detected (i.e., disabling SSE extensions).

Call it a massive-GPU situation if you want; the truth is, a well-coded game will run on as much hardware as you can possibly throw at it, CPU AND GPU (Metro 2033, BF3, DiRT 3, RE5, etc.), instead of being forced to run the entire game through one or two threads (Skyrim, SC2, Far Cry 2, etc.).

It only makes sense to force a game to run on 1-2 threads for Intel; that way they can sell their i3 series at a higher price, since the performance is sooo much better for those particular games.

http://media.bestofmicro.com/7/G/330748/original/F1 2010 High No AA.png
http://media.bestofmicro.com/7/B/330743/original/Crysis High No AA.png

After all, how many people would even consider recommending an i3 if this was the situation all the time, instead of just in games that can utilize as much hardware as they can handle (which seemed to be too much graphics for the i3)?

Here's a pro tip: open Task Manager, go to View -> Select Columns, and add "Threads". Now run the above games and see how many threads they use. Then find a better argument.
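If you'd rather check programmatically than eyeball Task Manager, here's a quick sketch with the Win32 Toolhelp API (Windows only; this counts the calling process's threads, and to inspect a game you'd match its PID instead):

#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

int main() {
    DWORD pid = GetCurrentProcessId();
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE) return 1;

    THREADENTRY32 te;
    te.dwSize = sizeof te;
    int count = 0;
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid)  // keep only our threads
                ++count;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    std::printf("threads in this process: %d\n", count);
    return 0;
}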
 
Havok has been around for years and is widely used, not due to Intel optimization but due to the great physics. I love Havok. It came out with the ability to pin enemies to walls with bolt guns and such.

I HATE Havok, since improvements in physics implementations have basically died off since it came out. I view physics as the next real enhancement, as I suspect rasterization is simply tapped out at this point, and it will be some time until we have the GPU power needed for ray tracing...
 
Wow ... mental gymnastics. You'd win the gold in that event.

...

It is not Intel's job to ensure the extensions to x86 that THEY created work as intended on non-Intel CPUs. If the Intel compiler spits out code that for whatever reason doesn't work on an AMD chip, who do you think is getting sued?

Coincidentally, there ARE some implementation differences between Intel's and AMD's SSE implementations...
 
Any news out of AMD land about Piledriver (hint: the original purpose of this thread)?

Frankly, the mods should create a new thread for the Intel/AMD bashers so they can take their squabbles there. I'm simply interested in any news about Piledriver, such as release date, improvements over Bulldozer, etc.

The legal wranglings between Intel and AMD are another subject for another thread!
 
Well, this is an older article from February:
http://www.tomshardware.com/news/AMD-Trinity-APU-CPU-radeon,14699.html

but it gives a good summary of what is going on.

There is also this:
http://www.bizjournals.com/austin/blog/morning_call/2012/03/amd-closes-tech-firm-acquisition.html?ana=yfcpc

But it has really been quiet as far as Piledriver news goes, and that is why this thread has slightly derailed. The good thing is that it has been very informative; even the heated arguments have a lot of good tech info. I have done more googling about different tech subjects and learned a lot about CPU architecture because of this thread.

Even the legal history between Intel and AMD is interesting, as long as there is no new news about AMD Trinity/Piledriver.

Just my opinion, I could be wrong.....
 
I think someone got April fooled 😱

This one is pretty good too: http://www.chipchick.com/2012/04/sony-vaio-q.html

I actually don't trust it. Wouldn't make much sense to throw a new 8-core CPU on an older process.

Still, it wouldn't hurt AMD to do it.

It is not Intel's job to ensure the extensions to x86 that THEY created work as intended on non-Intel CPUs. ...

I tried to state that; apparently it still doesn't count as correct. I guess Intel needs to start developing their SDKs to ensure proper function on other manufacturers' hardware, and AMD also needs to make sure their Radeons perform their best on both CPUs.

Any news out of AMD land about Piledriver (hint: the original purpose of this thread)?

...

I really don't see the big deal in any of it. Companies will optimize for hardware when they are funded to.

As for PD, I doubt we will get anything solid until at least a month before its release.
 