AMD Piledriver rumours ... and expert conjecture

Page 140
Status
Not open for further replies.
We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
Heh, looks like it is possible to change the CPUID, but not very straightforward:

It is possible to change the CPUID of AMD processors by using the AMD virtualization instructions. I hope that somebody will volunteer to make a program for this purpose. This will make it easy for anybody to check if their benchmark is fair and to improve the performance of software compiled with the Intel compiler on AMD processors.

http://www.agner.org/optimize/blog/read.php?i=49

Cheers!
 
I'd bet if anyone did that, Intel's lawyers would be calling within the hour. If the truth came out about just how far Intel went, they would never be able to sell that software again.

If it was the other way around, they would be happy to show everyone just how bad AMD CPUs really are.
 
Here is some interesting reading.

http://www.agner.org/optimize/blog/read.php?i=49#49

Even after being told to fix it, they only "sorta" fixed it. Now you have to run the compiler with a special option to skip the vendor ID check; the default still only uses SSE for GenuineIntel. Even then it doesn't necessarily run the correct path, it just works a little bit better.

/QxO

Enables SSE3, SSE2, and SSE instruction set optimizations for non-Intel CPUs
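The dispatch behavior being described can be sketched as a toy model. This is hypothetical Python, not Intel's actual code; the function name and the /QxO-style override flag are mine, mirroring what the linked posts report:

```python
# Hypothetical model of a vendor-string CPU dispatcher, as described in
# the posts above. Illustration only -- not Intel's actual implementation.

def select_code_path(vendor: str, has_sse2: bool, qxo_override: bool = False) -> str:
    """Pick an instruction-set code path based on the CPUID vendor string,
    the way the posts describe the compiler's runtime dispatcher working."""
    if vendor == "GenuineIntel" and has_sse2:
        return "SSE2 (Intel path)"
    if qxo_override and has_sse2:
        # /QxO-style override: skips the vendor check, but reportedly
        # still selects a separate, slower non-Intel SSE2 path.
        return "SSE2 (non-Intel path)"
    return "generic x87"

# Default behavior: an AMD CPU with SSE2 support still gets the generic path.
print(select_code_path("AuthenticAMD", has_sse2=True))   # generic x87
# With the override, SSE2 is used, but via the separate non-Intel path.
print(select_code_path("AuthenticAMD", has_sse2=True, qxo_override=True))
# Spoofing the vendor string (the CPUID hack discussed earlier) would get
# the full Intel path.
print(select_code_path("GenuineIntel", has_sse2=True))   # SSE2 (Intel path)
```

This is why spoofing the vendor string changes which path runs even though the hardware's feature flags never change.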

Here is some more reading material on it.

OK, ICC is rarely used, so AMD customers won't be damaged that much, but Intel benchmarks AMD's processors using ICC and uses that info for marketing, which may hurt AMD's business. For example:

I just tried Intel C++ compiler version 10.1 with option /QxO as you suggested. It generates the following versions of code for common mathematical functions: SSE2, SSE3, SSE4.1 and non-Intel SSE2. It doesn't work on any CPU prior to SSE2. This is the only compiler option that makes it run reasonably on an AMD, but why are there two different SSE2 versions, one for Intel and one for AMD? When I hack the CPU dispatcher and make it believe that it is an Intel, it runs 50-100% faster.

http://aceshardware.freeforums.org/cpuid-family-bits-added-because-of-flaw-in-intel-compiler-t428.html

Makes sense; AMD had its own SSE2 implementation for a while. And again, I fail to see why Intel should be forced to optimize its software for a competitor's product. What's next; an NVIDIA driver that works for AMD cards?
 
Sorry I'm getting into this late.

I read that the new AMD Trinity is going to allow CrossFire with their onboard video paired with a matching video card, do we know anything about that performance? And does Ivy Bridge have that functionality?
 
Earl and Chad ... my trigger finger is getting itchy ...

Unless you both want a few days away from the forums, I suggest you wander off and stay out of this thread ... since you're both just trying to cause trouble by baiting other users.
So any irrational or over-the-top attacks against Intel are perfectly fine, but suggesting that the attack is irrational and over the top is problematic?

 
Sorry I'm getting into this late.

I read that the new AMD Trinity is going to allow CrossFire with their onboard video paired with a matching video card, do we know anything about that performance? And does Ivy Bridge have that functionality?

Llano also provided the ability to CrossFire with a discrete GPU, but it was limited. For instance, I believe the A8 and A6 models would support HD 6670/6570/6450 cards while the A4 models would only support HD 6450/6350 cards. Trinity will also have this capability, but I am unaware what each model will support.

It is my understanding that Ivy Bridge does not support this function. I could be mistaken, however.
 
Llano also provided the ability to CrossFire with a discrete GPU, but it was limited. For instance, I believe the A8 and A6 models would support HD 6670/6570/6450 cards while the A4 models would only support HD 6450/6350 cards. Trinity will also have this capability, but I am unaware what each model will support.

It is my understanding that Ivy Bridge does not support this function. I could be mistaken, however.

Out of curiosity, why would you even do this? I mean, you'd get much better performance out of a single 6760, which isn't that expensive...
 
as far as i know:
top trinity apu will offer up to 30% more application performance (nothing about cpu, clockrate, turbo, hardware related etc), up to 56% more gfx performance (again, nothing about hardware specs) compared to llano. from amd's promo slides.
top igpu could be similar to 6670 or 6750 but nothing higher as it will start to approach 7750 territory.
possibly dual gfx with radeon 7600 cards and lower. might even dual gfx with 6670. these will depend on amd's driver support.. or lack of it.
top desktop apu will be unlocked, 100w tdp, built on glofo's 32 nm node.

intel's mobile cpus have the capability to switch between discrete and igpu. on desktop, intel gets help from lucid logix's virtu software. intel doesn't have its own discrete gpu to crossfire with, and its gpu architecture and drivers are different from amd's and nvidia's.
 
Out of curiosity, why would you even do this? I mean, you'd get much better performance out of a single 6760, which isn't that expensive...

It makes more sense in the laptop space.

Cost, power, and heat are reduced with the cheaper card.

Marketing-wise, it sounds a little better to turn on a second card to double performance instead of just switching to a different GPU. If you have to switch GPUs, that makes the first one sound weak.
 
Makes sense; AMD had its own SSE2 implementation for a while. And again, I fail to see why Intel should be forced to optimize its software for a competitor's product. What's next; an NVIDIA driver that works for AMD cards?
Two reasons. 1) AMD paid Intel for the hardware implementation; Intel stabbed them in the back by disabling it through software.
2) If Intel were uncontrolled, they would force their compiler down the throats of as many vendors as they possibly could, just to cripple AMD CPUs beyond what they already are.

Just look at how badly AMD already gets bashed as a gaming CPU because these reviews use Intel-optimized games. Now imagine that tenfold. AMD would be out of business in a year, or at least reduced to VIA status.

BTW, looks like AMD is finally starting to push some software to the direction they are heading before they get there instead of after.

http://blogs.amd.com/fusion/2012/04/24/adobe-and-amd-enable-brilliant-experiences/
 
It makes more sense in the laptop space.

Cost, power, and heat are reduced with the cheaper card.

Marketing-wise, it sounds a little better to turn on a second card to double performance instead of just switching to a different GPU. If you have to switch GPUs, that makes the first one sound weak.

But remember, it's CrossFire with the APU, so you have to take into account the power/heat from BOTH GPUs.
 
Two reasons. 1) AMD paid Intel for the hardware implementation; Intel stabbed them in the back by disabling it through software.

In case anyone here forgets: SSE is an Intel specification, and AMD has its own implementation of that specification. My point being, is there anything in the x86 license agreement that even gives AMD the explicit right to use Intel's SSE to begin with?

Secondly, it's Intel's compiler, optimized for Intel's CPUs. Since the underlying implementations of both x86 and SSE instructions differ between Intel and AMD, there DOES exist the possibility, however remote, that if the compiler is very tightly optimized for Intel, forcing SSE on AMD could produce some issues...

Finally, it's an Intel product; there is no reason why they should be forced to optimize for a competitor's product. If AMD doesn't like it, they can either get devs to manually insert SSE instructions directly into code, or convince them to use an alternative compiler.

2) If Intel were uncontrolled, they would force their compiler down the throats of as many vendors as they possibly could, just to cripple AMD CPUs beyond what they already are.

Just look at how badly AMD already gets bashed as a gaming CPU because these reviews use Intel-optimized games. Now imagine that tenfold. AMD would be out of business in a year, or at least reduced to VIA status.

And you automatically assume they were compiled with Intel's compiler. And you are making excuses which may or may not be true.

Besides, it's called business. If AMD can't push a viable alternative, then too bad for them.
 
In case anyone here forgets: SSE is an Intel specification, and AMD has its own implementation of that specification. My point being, is there anything in the x86 license agreement that even gives AMD the explicit right to use Intel's SSE to begin with?

Secondly, it's Intel's compiler, optimized for Intel's CPUs. Since the underlying implementations of both x86 and SSE instructions differ between Intel and AMD, there DOES exist the possibility, however remote, that if the compiler is very tightly optimized for Intel, forcing SSE on AMD could produce some issues...

It IS the cross-license agreement, and one of the main reasons AMD has it. Without it AMD couldn't implement SSE of any sort, and Intel would lose x86-64. The way you are defending Intel makes it sound like you work directly for them, or you are at least blinded by cult worship. That's Intel's excuse for doing it in the first place: "it may not work, so we won't even test it or allow it to function."

If you read above, when SSE was enabled on an AMD processor with a spoofed "GenuineIntel" string, it ran 50% faster. Obviously it works; the problem would be trying to spoof every program made, to trick it into running as if on an Intel CPU on an AMD machine. If you can prove that AMD can't function with Intel's code, then I would be more inclined to believe the lies Intel spreads about how AMD's SSE instructions are broken.

Finally, it's an Intel product; there is no reason why they should be forced to optimize for a competitor's product. If AMD doesn't like it, they can either get devs to manually insert SSE instructions directly into code, or convince them to use an alternative compiler.

If it was that simple, it would have been done. How much money would it take to get it done? Say, for example, AMD pays $100M to get code implemented; Intel counters with $500M to get it undone ... It's easy to assume the answer is as simple as "just do it", but in business it's a matter of affording it or counter-killing the deal. Obviously it's not a game AMD can win; Intel has too much credibility to lose.

In an uncontrolled market, Intel would have already slaughtered AMD and any other company that even thought of making a CPU of any kind.

And you automatically assume they were compiled with Intel's compiler. And you are making excuses which may or may not be true.

Besides, it's called business. If AMD can't push a viable alternative, then too bad for them.
A viable alternative for 20% of the market share ... who would use it? As for excuses, do you really believe that a 2.6 GHz Intel CPU is faster than a 4.5+ GHz AMD one on an equal playing field?

The way you talk, it seems you would be happy to see AMD out of business at the hands of Intel.
 
It IS the cross-license agreement, and one of the main reasons AMD has it. Without it AMD couldn't implement SSE of any sort, and Intel would lose x86-64. The way you are defending Intel makes it sound like you work directly for them, or you are at least blinded by cult worship. That's Intel's excuse for doing it in the first place: "it may not work, so we won't even test it or allow it to function."

If you read above, when SSE was enabled on an AMD processor with a spoofed "GenuineIntel" string, it ran 50% faster. Obviously it works; the problem would be trying to spoof every program made, to trick it into running as if on an Intel CPU on an AMD machine. If you can prove that AMD can't function with Intel's code, then I would be more inclined to believe the lies Intel spreads about how AMD's SSE instructions are broken.

Finally, it's an Intel product; there is no reason why they should be forced to optimize for a competitor's product. If AMD doesn't like it, they can either get devs to manually insert SSE instructions directly into code, or convince them to use an alternative compiler.

If it was that simple, it would have been done. How much money would it take to get it done? Say, for example, AMD pays $100M to get code implemented; Intel counters with $500M to get it undone ... It's easy to assume the answer is as simple as "just do it", but in business it's a matter of affording it or counter-killing the deal. Obviously it's not a game AMD can win; Intel has too much credibility to lose.

In an uncontrolled market, Intel would have already slaughtered AMD and any other company that even thought of making a CPU of any kind.

It's a cold day in hell when I actually agree with you, but on this front I do. It is no secret Intel STILL has a behind-the-scenes chokehold on AMD, keeping them right where they want them. Intel is nothing more than a big bully.
And you automatically assume they were compiled with Intel's compiler. And you are making excuses which may or may not be true.

Besides, it's called business. If AMD can't push a viable alternative, then too bad for them.
A viable alternative for 20% of the market share ... who would use it? As for excuses, do you really believe that a 2.6 GHz Intel CPU is faster than a 4.5+ GHz AMD one on an equal playing field?

The way you talk, it seems you would be happy to see AMD out of business at the hands of Intel.
 
as far as i know:
top trinity apu will offer up to 30% more application performance (nothing about cpu, clockrate, turbo, hardware related etc), up to 56% more gfx performance (again, nothing about hardware specs) compared to llano. from amd's promo slides.
top igpu could be similar to 6670 or 6750 but nothing higher as it will start to approach 7750 territory.
possibly dual gfx with radeon 7600 cards and lower. might even dual gfx with 6670. these will depend on amd's driver support.. or lack of it.
top desktop apu will be unlocked, 100w tdp, built on glofo's 32 nm node.

intel's mobile cpus have the capability to switch between discrete and igpu. on desktop, intel gets help from lucid logix's virtu software. intel doesn't have its own discrete gpu to crossfire with, and its gpu architecture and drivers are different from amd's and nvidia's.

Ahhh marketing slides. The best source of info until the real reviews come out.

I remember when they had those for the HD 5 series. The HD 5870 had 2x the TFLOPs of an HD 4870, but not quite 2x the performance in games.
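The on-paper gap is simple arithmetic: for those AMD VLIW parts, peak single-precision throughput is shaders × 2 FLOPs (one multiply-add) × clock. A quick check using the commonly published shader counts and clocks (assumed figures, worth double-checking against reviews):

```python
def peak_tflops(shader_count: int, clock_ghz: float) -> float:
    # Peak single-precision: each shader retires a multiply-add (2 FLOPs) per clock.
    # shaders * 2 * GHz gives GFLOPS; divide by 1000 for TFLOPS.
    return shader_count * 2 * clock_ghz / 1000.0

hd4870 = peak_tflops(800, 0.750)   # ~1.2 TFLOPS
hd5870 = peak_tflops(1600, 0.850)  # ~2.72 TFLOPS
print(round(hd5870 / hd4870, 2))   # ~2.27x on paper
```

Games scaled well below that paper ratio, which is the point: peak FLOPs are an upper bound, not a performance prediction.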

Thats just the way it is though.

Can't wait for some real reviews though.

On an on-topic note, it looks like part of the socket change for Trinity was for Eyefinity.

http://wccftech.com/amd-trinity-apus-piledriver-detailed-leaked-slides/

Should get interesting in 2 weeks.

So spec-wise it's about the same as an HD 2900, minus the ring bus and 1GB of VRAM. A few more shaders, and the clock speed is pretty close (I think the HD 2900 was around 650MHz core with 320 SPUs).

I wouldn't mind a comparison to it just for shnitz.

I find it strange that AMD is marketing trinity for eyefinity. Seems like a waste since you wouldn't really be able to do more than play some games on lowest or just use the screens for windows.

I would agree. It's great, but for games it does seem pointless TBH. It would work if you CrossFire it with an HD 6670/7670, but if not, then it's pointless, as games are meant to be played with their settings way up.

Speaking of game settings, anyone else see the "Recommended" Max Payne 3 PC specs? An i7-970/FX-8150, 16GB of RAM, an HD 7970/GTX 680, and 35GB of HDD space.

Anyone else remember a classic Unreal Tournament saying that can speak for that??? I sure do..... HOLY S........
 
"Speaking of game settings, anyone else see the "Recommended" Max Payne 3 PC specs? An i7-970/FX-8150, 16GB of RAM, an HD 7970/GTX 680, and 35GB of HDD space"
-JS

I think it is great that they are raising the level of recommended specs.
I have a very modest gaming system (Phenom II Deneb at 3.5GHz, HD 5770 1GB), but I am okay with 1920x1080 at medium-to-high settings, and I am glad that the game developers are pushing the limits of the hardware.
I think that encourages hardware companies to push harder themselves.
IMHO, the original Crysis plus Far Cry 2 helped create demand for better hardware years ago.
If a medium-level rig could play all games at the highest settings, there wouldn't be demand for higher-level tech.

 