AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
"Speaking of game settings, anyone else see the "Recommended" Max Payne 3 PC specs? A i7-970/FX-8150, 16GB of RAM, a HD7970/GTX680 and 35GB of HDD space"
-JS

I think it is great that they are raising the level of the recommended specs
I have a very modest gaming system (Phenom II Deneb @ 3.5GHz, HD 5770 1GB) and I am okay with 1920x1080 at medium to high settings, but I am glad that the game developers are pushing the limits of the hardware
I think that encourages the hardware companies to push harder themselves
IMHO the original Crysis plus Far Cry 2 helped create a demand for better hardware years ago
If a medium-level rig could play all games at the highest settings then there wouldn't be the demand for higher-level tech

I understand that, but it is a bit strange since it looks like it's using the GTA IV engine. But they are rendering every bullet fired, so that may take a toll, or it may be the MP requirements due to the number of bullets and the bullet time at play.

I may need to move to an HD 7970 to play it.
 
"I may need to move to a HD7970 to play it"
-JS

That statement proves my point perfectly
More demanding games create a demand for better hardware
JS, you have an HD 5870, which is still a very respectable card, but because of MP 3 you want to upgrade, and there will be plenty of others doing the same thing
the funny thing is that in the desktop world games have been the number one driving factor in hardware development
if you took away gaming then we would still be using dual-core, HD 5450-level machines
 
"I may need to move to a HD7970 to play it"
-JS

That statement proves my point perfectly
More demanding games creates a demand for better hardware
JS you have a HD 5870 which is still a very respectable card but because of MP 3 you want to upgrade and there will be plenty of others doing the same thing
the funny thing is in the desktop world that games have been the number one driving factor in hardware development
if you took away gaming then we would still be using dual cores and HD 5450 level machines

True, but most games I play don't need the massive upgrades (I mostly play Valve games using Source, so even my older HD 4870 would still suffice).

If anything, my move is more because I want it.

Still, it's gotta either be overkill (they want to list the max most enthusiasts use, to be safe) or it will be the next Crysis/Doom 3/Half-Life 2 (all of which, when they came out, could not be maxed on then-current high-end hardware).

Still, it makes me wonder: if these are the recommended specs for PC, the console version must not look nearly as good as the PC version, because nothing in any console comes close to those specs performance-wise.

I would also assume we will be waiting 2-3 generations of APUs/IGPs, if not more, to be able to play it at decent settings.
 
Wow, more misinformation being spread by the usual suspects.

Intel made MMX; AMD implemented MMX, then made 3DNow! as a superset of MMX.

MMX is officially a meaningless initialism trademarked by Intel; unofficially, the initials have been variously explained as standing for MultiMedia eXtension, Multiple Math eXtension, or Matrix Math eXtension.

AMD, during one of its numerous court battles with Intel, produced marketing material from Intel indicating that MMX stood for "Matrix Math Extensions". Since an initialism cannot be trademarked, this was an attempt to invalidate Intel's trademark.[3] In 1997, Intel filed suit against AMD and Cyrix Corp. for misuse of its trademark MMX. AMD and Intel settled, with AMD acknowledging MMX as a trademark owned by Intel, and with Intel granting AMD rights to use the MMX trademark as a technology name, but not a processor name.

Intel then later created SSE to fix some of the issues with MMX, the same issues 3DNow! fixed. AMD gained access to the SSE instruction set through its non-expiring license to the x86 ISA and all extensions. This was the result of a court case in the '90s where the court sided with AMD, saying that its x86 license was not limited to the 386 and prior ISAs. Thus any extension that Intel creates, AMD automatically gets unlimited access to. This license is non-transferable; AMD cannot sub-license or sell this access to any other manufacturer.

So no, AMD did not implement its "own" SSE. When AMD made 3DNow!, Intel's SSE didn't even exist, and AMD was trying to correct some of the issues created with MMX. Intel did it better by creating SSE, and thus AMD implemented Intel's SSE specification. AMD has since implemented each and every x86 extension that Intel creates, typically within a year or two of Intel releasing the specification.

Eventually all this licensing nonsense was settled in the last court case, where AMD authorized Intel access to the AMD64 64-bit extension of the x86 ISA. In exchange for this license, Intel agreed to cease all litigation against AMD over its x86 license and to stop blocking SSE instructions on AMD CPUs. It goes a bit deeper into OEM "partner programs" and what have you. Intel did not enter into this agreement by choice; the courts ordered them to settle it or have the courts settle it for them. Mind you, courts rarely favor the big super-company bullying the smaller company in these matters; see the Microsoft antitrust case.
 
Still, it's gotta either be overkill (they want to list the max most enthusiasts use, to be safe) or it will be the next Crysis/Doom 3/Half-Life 2 (all of which, when they came out, could not be maxed on then-current high-end hardware).
And they all still look nice today (Doom 3 is starting to show its age IMO). Half-Life 2 seems to age better than anything else. It still looks so good!
 
It IS the cross-license agreement, and that is one of the main reasons AMD has it. Without it, AMD couldn't implement SSE of any sort, and Intel would lose x86-64. The way you are defending Intel makes it sound like you work directly for them, or you are at least blinded by cult worship. That's Intel's excuse for doing it in the first place: "it may not work, so we won't even test it or allow it to function."

As far as I'm aware, the CLA with Intel gives both companies the right to use the other's architecture. To my knowledge, there is nothing anywhere in the CLA that guarantees AMD the right to use any of Intel's additional specifications [MMX, SSE, AVX, etc.].

If you read above, when SSE code was run on an AMD processor with a spoofed "GenuineIntel" string, it ran 50% faster. Obviously it works; the problem would be spoofing every program into treating an AMD machine as an Intel CPU. If you can prove that AMD can't function with Intel's code, then I would be more inclined to believe the lies Intel spreads about AMD's SSE instructions being broken.

"Obviously it works". As a software engineer, every time I hear that, I know the program in question is fundamentally flawed, as it hasn't been properly tested.

Here's the problem: let's say, for argument's sake, the Intel compiler generates SSE code that doesn't quite work with AMD's implementation, and this causes some random program to crash. Which company do you think will get yelled at?

Intel made a compiler for Intel chips, and the only reason it works with AMD chips is because they share the same ISA. Intel is under no obligation to optimise for another implementation of the x86 architecture. If AMD put out a compiler that did the exact same thing, I'd have no complaints.
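For anyone curious what a vendor-string check actually looks like, here is a minimal C++ sketch using the __cpuid intrinsic from <intrin.h> (MSVC/Intel compilers). The dispatch logic is purely illustrative, not Intel's actual runtime, but it shows why spoofing the "GenuineIntel" string can change which code path a vendor-keyed dispatcher picks even when the CPU reports SSE support.

```cpp
// Minimal sketch of a CPUID vendor check (illustrative only, not Intel's
// actual dispatcher). Uses the __cpuid intrinsic from <intrin.h>.
#include <intrin.h>
#include <cstring>
#include <cstdio>

static bool vendor_is_genuine_intel() {
    int regs[4] = {0};
    __cpuid(regs, 0);                        // leaf 0: vendor string in EBX, EDX, ECX
    char vendor[13] = {0};
    std::memcpy(vendor + 0, &regs[1], 4);    // EBX -> "Genu"
    std::memcpy(vendor + 4, &regs[3], 4);    // EDX -> "ineI"
    std::memcpy(vendor + 8, &regs[2], 4);    // ECX -> "ntel"
    return std::strcmp(vendor, "GenuineIntel") == 0;
}

static bool cpu_has_sse2() {
    int regs[4] = {0};
    __cpuid(regs, 1);                        // leaf 1: feature flags
    return (regs[3] & (1 << 26)) != 0;       // EDX bit 26 = SSE2
}

int main() {
    // A dispatcher keyed on the vendor string (the behaviour being argued
    // about above) takes the slow path on AMD even when SSE2 is reported:
    if (vendor_is_genuine_intel() && cpu_has_sse2())
        std::puts("fast SSE2 path");
    else if (cpu_has_sse2())
        std::puts("SSE2 supported, but a vendor-keyed dispatcher would skip it");
    else
        std::puts("generic x87 path");
    return 0;
}
```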

In an uncontrolled market, Intel would have already slaughtered AMD and any other company that even thought of making a CPU of any kind.

Well, that's Capitalism for you. You can't be all "Yay, Capitalism!", and then complain about how company X would run company Y out of the market if it were allowed to. If AMD can't compete against Intel, they should focus their attention somewhere they can make more profit. And if the markets dictate Intel become a monopoly, well, that's Capitalism for you.

Viable alternative for 20% of the market share, ... who would use it? As for excuses, do you really believe that a 2.6GHz Intel CPU is faster than a 4.5+GHz AMD on an equal playing field?

Not Intel's problem if AMD can't generate enough market share to move the markets. As for your theoretical example, it wasn't that long ago when the reverse situation [a low-clocked Athlon beating the pants off a P4] was true, so why is it so hard to believe?
 
Has there been any speculation on what the desktop Trinity processors will cost when they are released, or is it just assumed that they will mimic Llano initial costs?

The Trinity A10-5800K is pretty easy to match up to the Llano A8-3870K at $135, but I'm having trouble trying to figure out the rest of them. For instance, should the Trinity A8-5500 match up with the A6-3500 at ~$90 or with the A6-3650 at $115?
 
"Still its gotta either be overkill (they want to put the max most enthusiasts use to be safe) or it will be the next Crysis/Doom 3/ Half Life 2 (all which when they came out were not able to be maxed on current high end hardware)."

that is what we need though
we need games that cant be maxed on current hardware
it drives the hardware devs to put out more powerful tech
the past couple of years as far as required hardware has been rather low end
alot of gamers have been complaining the last couple of years that the games arent as spectactular as they used to be
there are times I bought games not just because I wanted to play them but because I wanted to see if my system could run them
I have a lower end gaming system but it has met most of the recommended system requirements for alot of games that have come out in the past two years which isnt good
gaming drives tech advancement in the home desktop market so I am glad to see a game coming out that is really pushing the limits like Max Payne 3
I hope more new games have some high end system requirements otherwise you get Nvidia and AMD just putting out rebadged GPUs for new models like AMD did with the HD 5xxx and HD 6xxx
 
"Still its gotta either be overkill (they want to put the max most enthusiasts use to be safe) or it will be the next Crysis/Doom 3/ Half Life 2 (all which when they came out were not able to be maxed on current high end hardware)."

that is what we need though
we need games that cant be maxed on current hardware
it drives the hardware devs to put out more powerful tech
the past couple of years as far as required hardware has been rather low end
alot of gamers have been complaining the last couple of years that the games arent as spectactular as they used to be
there are times I bought games not just because I wanted to play them but because I wanted to see if my system could run them
I have a lower end gaming system but it has met most of the recommended system requirements for alot of games that have come out in the past two years which isnt good
gaming drives tech advancement in the home desktop market so I am glad to see a game coming out that is really pushing the limits like Max Payne 3
I hope more new games have some high end system requirements otherwise you get Nvidia and AMD just putting out rebadged GPUs for new models like AMD did with the HD 5xxx and HD 6xxx

There's a reason: the DX API is basically tapped out graphically. The few things you can add now tend to be VERY computationally expensive to perform, and are subtle effects [refraction, for example]. There simply isn't much more you can do with rasterization. Why do you think AMD/NVIDIA are spending so much time on shader-based AA modes, better Vsync implementations, or multiple-screen displays? They know this, and are trying to find ways to justify people buying newer cards.

That being said, some of the blame can be thrown on the engines: how many games run on Unreal 3?

I expect Ray Tracing to take off in a decade or so.
 
"Obviously it works". As a software engineer, every time I hear that, I know the program in question is fundamentally flawed, as it hasn't been properly tested.

Here's the problem: let's say, for argument's sake, the Intel compiler generates SSE code that doesn't quite work with AMD's implementation, and this causes some random program to crash. Which company do you think will get yelled at?

Intel made a compiler for Intel chips, and the only reason it works with AMD chips is because they share the same ISA. Intel is under no obligation to optimise for another implementation of the x86 architecture. If AMD put out a compiler that did the exact same thing, I'd have no complaints.
There is a difference between optimizing and bastardizing. Intel claims it's optimizing, but what they are doing is bastardizing AMD by disabling features that AMD supports. If Intel made code that caused AMD chips to crash, that would be even better for Intel: "hey, look, AMD CPUs don't even work." Who would get yelled at? AMD, not Intel.

Instead, they are simply disabling code paths on AMD chips because AMD CPUs do work with Intel's code. The reverse situation you are theorizing doesn't exist.

Since you know more about programming than I do (I only took one semester of Java and got bored), see if you can write some code that works on Intel but not AMD.
 
There is a difference between optimizing and bastardizing. Intel claims it's optimizing, but what they are doing is bastardizing AMD by disabling features that AMD supports.

Wrong. If I manually put an inline SSE4 call into my code, guess what it's going to compile as? SSE isn't being disabled, as you claim. It's simply not being automatically emitted during code optimization. Big difference.

The issue is one of optimization. Period. AMD isn't being handicapped; they just aren't being optimized for. Big difference.

Now, if manual SSE instructions were physically blocked, THAT would be an issue Intel should be punished for. But failure to optimize for a competitor ranks rather low on the "evil" scale to me.
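To make the "manual inline SSE4 call" point concrete: a hand-written intrinsic like the SSE4.1 dot product below compiles straight to the DPPS instruction no matter what vendor string the CPU reports; only the compiler's automatic vectorization and dispatch is vendor-keyed. This is just a minimal sketch using the standard <smmintrin.h> intrinsics (build with SSE4.1 enabled, e.g. -msse4.1 on GCC), not code from any shipping product.

```cpp
// Hand-written SSE4.1: _mm_dp_ps emits the DPPS instruction directly,
// regardless of which vendor's CPU runs the binary. Illustrative sketch only.
#include <smmintrin.h>   // SSE4.1 intrinsics
#include <cstdio>

static float dot4(const float* a, const float* b) {
    __m128 va = _mm_loadu_ps(a);             // load 4 floats (unaligned)
    __m128 vb = _mm_loadu_ps(b);
    // 0xF1: multiply and sum all four lanes, store the result in the low lane
    __m128 d  = _mm_dp_ps(va, vb, 0xF1);
    return _mm_cvtss_f32(d);                 // extract the low float
}

int main() {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {5.0f, 6.0f, 7.0f, 8.0f};
    std::printf("dot = %f\n", dot4(a, b));   // 70.0 on any SSE4.1-capable CPU
    return 0;
}
```

On a CPU without SSE4.1 this simply faults with an illegal instruction, which is exactly the difference between code the programmer writes by hand and code the compiler chooses to emit behind a runtime check.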
 
One easy way to test that theory: write some SSE4 code for a benchmark, compile it with the Intel and MS compilers, and see which one runs faster on an AMD CPU.

And as I said before, IS it OK for any company to make a product for the sole purpose of degrading a competitor?

I think if that were an acceptable practice, we would be seeing it everywhere.
 
One easy way to test that theory: write some SSE4 code for a benchmark, compile it with the Intel and MS compilers, and see which one runs faster on an AMD CPU.

...That sounds good in theory, as long as both CPUs have the EXACT same performance, and no other compiler optimizations were used that would affect the results.

The only way to test accurately would be to build the program, disable ALL optimizations during compilation, and test on an AMD CPU to get the "baseline AMD" measurement. Then change the VendorID string and test again.
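A bare-bones version of the benchmark half of that test could look like the sketch below: a fixed SSE2 kernel timed with the C++11 <chrono> clock, compiled once per compiler and run on the same machine. The kernel, array sizes, and repetition count are arbitrary choices for illustration; a real comparison would also lock the clock speed, average many runs, and handle the VendorID-spoofing part separately.

```cpp
// Bare-bones timing harness for a hand-written SSE2 kernel (sketch only).
// Build the same file with each compiler, e.g. MSVC "cl /O2 /arch:SSE2",
// GCC "g++ -O2 -msse2", and the Intel compiler with its /O2 (-O2) switch,
// then run the binaries on the same machine and compare the times.
#include <emmintrin.h>   // SSE2 intrinsics
#include <chrono>
#include <cstdio>

int main() {
    const int N = 1 << 16;
    static alignas(16) float a[N];
    static alignas(16) float b[N];
    static alignas(16) float c[N];
    for (int i = 0; i < N; ++i) { a[i] = i * 0.5f; b[i] = i * 0.25f; }

    auto t0 = std::chrono::steady_clock::now();
    for (int rep = 0; rep < 10000; ++rep) {
        // Simple multiply-add over the arrays, 4 floats at a time.
        // A real harness would vary inputs so the compiler can't collapse
        // the repeated identical work.
        for (int i = 0; i < N; i += 4) {
            __m128 va = _mm_load_ps(a + i);
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(c + i, _mm_add_ps(_mm_mul_ps(va, vb), vb));
        }
    }
    auto t1 = std::chrono::steady_clock::now();

    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("kernel time: %.1f ms (checksum %f)\n", ms, (double)c[N - 1]);
    return 0;
}
```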
 
There is a difference between optimizing and bastardizing. Intel claims it's optimizing, but what they are doing is bastardizing AMD by disabling features that AMD supports. If Intel made code that caused AMD chips to crash, that would be even better for Intel: "hey, look, AMD CPUs don't even work." Who would get yelled at? AMD, not Intel.

Instead, they are simply disabling code paths on AMD chips because AMD CPUs do work with Intel's code. The reverse situation you are theorizing doesn't exist.

Since you know more about programming than I do (I only took one semester of Java and got bored), see if you can write some code that works on Intel but not AMD.



I've never heard of this issue, and my friend is a programmer. The instruction sets that everyone uses work on both AMD and Intel; it's AMD's design that makes certain instruction sets run slower.

By the way, I find programming to be extremely boring as well. I'm a network/hardware guy; I don't even like batch files/commands.
 
I have no issue with AMD's design being slower. The thing is, it's only about 10% on a per-core basis. Piss-poor compiling by the Intel compiler accounts for the other 40%.


http://www.legitreviews.com/article/1741/15/

If you look at the screenshot of the CPU usage, the game only uses 4 cores (with the HT cores not being used); the difference between AMD and Intel is ~10%.

http://www.legitreviews.com/article/1741/16/

Even at 100% CPU usage in a single-core game ... ~10%

But throw in an "Intel optimized" (compiled) game and all of a sudden it's 50%

http://media.bestofmicro.com/G/O/324600/original/OC_StarCraftII.png


It's not just a coincidence that Intel wants AMD CPUs to look bad by sponsoring game devs. It's more a question of how they are doing it, and why everyone pushes to only show those games without letting anyone know Intel is responsible for making that happen.

Instead, everyone wants to say "oh, Intel is so much better, don't ever question why".

The reason why is what Intel fans don't want to see.
 

You would have a point, except for one problem: Starcraft 2 does not scale beyond 2 cores [which is SAD for an RTS, which should be easy to scale]. So per-core IPC matters a lot more than it would if the SW scaled even slightly better. Throw in the performance hit of using a CMT core (~20%) on a BD, and you can easily explain the results. You are seeing a perfect example of why clock speed does not compensate for poor IPC.

I've never heard of this issue, and my friend is a programmer. The instruction sets that everyone uses work on both AMD and Intel; it's AMD's design that makes certain instruction sets run slower.

By the way, I find programming to be extremely boring as well. I'm a network/hardware guy; I don't even like batch files/commands.

I've worked on SW going back to the '70s [JOVIAL *shudder*]. It's fun, because you logically have to think everything through. Never mind that your HW can't do anything without the SW sending I/O back and forth... :non:
 
And they all still look nice today (Doom 3 is starting to show its age IMO). Half-Life 2 seems to age better than anything else. It still looks so good!

HL2 has had a few engine updates though (multi-core rendering, etc.), so that's why it has aged well. Of course, the facial animations in it are amazing even in comparison to a lot of new games.

"Still its gotta either be overkill (they want to put the max most enthusiasts use to be safe) or it will be the next Crysis/Doom 3/ Half Life 2 (all which when they came out were not able to be maxed on current high end hardware)."

that is what we need though
we need games that cant be maxed on current hardware
it drives the hardware devs to put out more powerful tech
the past couple of years as far as required hardware has been rather low end
alot of gamers have been complaining the last couple of years that the games arent as spectactular as they used to be
there are times I bought games not just because I wanted to play them but because I wanted to see if my system could run them
I have a lower end gaming system but it has met most of the recommended system requirements for alot of games that have come out in the past two years which isnt good
gaming drives tech advancement in the home desktop market so I am glad to see a game coming out that is really pushing the limits like Max Payne 3
I hope more new games have some high end system requirements otherwise you get Nvidia and AMD just putting out rebadged GPUs for new models like AMD did with the HD 5xxx and HD 6xxx

Well, the HD 6K was mostly just a rebadge; it had VLIW4 vs VLIW5 and was a bit better at DX11. But still, the only thing worth upgrading to from an HD 5870 is an HD 7970/7950. The HD 6000 cards didn't compel much.

There's a reason: the DX API is basically tapped out graphically. The few things you can add now tend to be VERY computationally expensive to perform, and are subtle effects [refraction, for example]. There simply isn't much more you can do with rasterization. Why do you think AMD/NVIDIA are spending so much time on shader-based AA modes, better Vsync implementations, or multiple-screen displays? They know this, and are trying to find ways to justify people buying newer cards.

That being said, some of the blame can be thrown on the engines: how many games run on Unreal 3?

I expect Ray Tracing to take off in a decade or so.

Way too many use Unreal 3. But that may be why: it's not super advanced, but it still looks decent.

Look how high you have to overclock the FX Bulldozers (all of them) just to beat out the i3-2100, and the FX-4100 still doesn't match it @ 4.5GHz...
such a shame.
And the Deneb @ 4.0GHz is matching the FX-6100 and FX-8120 @ 4.5 and 4.2GHz respectively...
such a shame, part two...

numbers never lie...

It also shows the inefficiency of a BD CPU vs Phenom II. The worst thing about it, TBH, is that overall it is still inefficient compared to Phenom II in most desktop applications.
 
You would have a point, except for one problem: Starcraft 2 does not scale beyond 2 cores [which is SAD for an RTS, which should be easy to scale]. So per-core IPC matters a lot more than it would if the SW scaled even slightly better. Throw in the performance hit of using a CMT core (~20%) on a BD, and you can easily explain the results. You are seeing a perfect example of why clock speed does not compensate for poor IPC.
Except that's the whole discussion. In Starcraft II, are Intel CPUs running full SSE code vs. no SSE support on AMD? It sure makes Intel look better to just claim poor IPC; it's easy to do, and you don't have to justify it.

My question is why Starcraft II scales so badly on AMD CPUs when STALKER, running at 100% CPU usage on one core, is nearly identical to Intel. If anything, running at 100% should show you exactly how weak an architecture is.

Starcraft II does scale to 3 cores, but punishes you past that. http://www.bit-tech.net/hardware/cpus/2010/08/18/how-many-cpu-cores-does-starcraft-2-use/2

But you did hit the nail on the head as to why Phenom II sometimes outperforms BD ... a 20% hit for CMT. But what happens when you adjust processor affinity on BD? The problem is you'd have to do it every time you run the game, but it would be interesting for testing.
 
Good point Mal.

That still leaves a 10% or so performance gap ... so if AMD can do some more work to catch up, then the end result for us as consumers is a plus.

When the products are closely matched the prices tend to come down.

I don't think any of the AMD fans would dispute that BD needs some work.

But I agree the CPU is nowhere near as bad as some people are portraying it.

Has anyone got more info on the Trinity benchmarks please?
 
You would have a point, except for one problem: Starcraft 2 does not scale beyond 2 cores [which is SAD for an RTS, which should be easy to scale]. So Per-core IPC matters a lot more then it would if the SW scaled even slightly more. Throw in the performance hit of using a CMT core (~20%) on a BD, and you can easily explain the results. You see a perfect example of why clockspeed does not compensate for poor IPC.
I feel like you read only the second half of his post. His links to H.A.W.X. and S.T.A.L.K.E.R. showed a ~10% difference. One scaled across 4 cores, the other only 1. Both showed much less difference than the SCII benchmark did.
 
But you did hit the nail on the head as to why Phenom II sometimes outperforms BD ... a 20% hit for CMT. But what happens when you adjust processor affinity on BD? The problem is you'd have to do it every time you run the game, but it would be interesting for testing.
I think everyone needs to realize that CMT doesn't make the second core 80%; it makes the two combined 180%. Seems like a common misconception among some. Running Cinebench 11.5 with affinities set to cores 1, 3, 5, and 7 gets the same score as cores 0-3, which goes against the 20% CMT hit altogether.

Changing affinities doesn't necessarily make anything run better, but it prevents some of the stutters that might come about when running a few heavy threads on the same core(s), which just makes things feel smoother.
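For anyone who wants to try the affinity experiment without redoing it in Task Manager on every launch, below is a small Windows-only C++ launcher sketch: it starts a program suspended, sets its affinity to logical CPUs 1, 3, 5 and 7 (one core per Bulldozer module on an eight-thread FX chip), then resumes it. The game path is a placeholder and the mask is an assumption for that specific CPU layout.

```cpp
// Launch a program with its CPU affinity pre-set to logical processors
// 1, 3, 5 and 7 (one core per Bulldozer module). Windows-only sketch;
// the executable path below is a placeholder.
#include <windows.h>
#include <cstdio>

int main() {
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmd[] = "C:\\Games\\SomeGame\\game.exe";   // placeholder path

    // Create the process suspended so the affinity is set before it runs.
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                        NULL, NULL, &si, &pi)) {
        std::printf("CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    // Bits 1, 3, 5, 7 -> mask 0xAA. Each bit is one logical processor.
    DWORD_PTR mask = (1 << 1) | (1 << 3) | (1 << 5) | (1 << 7);
    if (!SetProcessAffinityMask(pi.hProcess, mask))
        std::printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
```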
 