AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
I feel like you read only the second half of his post. His links to H.A.W.X. and S.T.A.L.K.E.R. showed a ~10% difference. One scaled across four cores, the other across only one. Both showed much less difference than the SCII benchmark did.

SCII is much more CPU intensive than H.A.W.X. or S.T.A.L.K.E.R., though. H.A.W.X. is a flight simulation with very little in terms of AI, and S.T.A.L.K.E.R. is an FPS; both will lean on the GPU before the CPU, which is normal for the majority of games, and won't show a major difference between architectures until one of the CPUs becomes a bottleneck (in that case, a new GPU bottlenecked by one CPU won't show the same performance gains it would with a CPU that doesn't bottleneck it).

SCII, on the other hand, has a lot of AI always working on a map, and if you remember the Zerg rushes, it can get pretty intense. It's just like C&C, Age of Empires or any other RTS-style game. They tend to have a massive number of units, all able to be unique, and a more powerful CPU tends to benefit that aspect.

I could compare Crysis, but Crysis again is an FPS and was more limited by GPUs than anything, as it didn't scale beyond two cores (it would not run on one because it used the second core for sound, but that was it). SCII was known to be a major CPU game from the start.
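
To put the bottleneck point in plainer terms, here's a toy model I knocked up (the millisecond numbers are made up purely for illustration): frame time is roughly whichever of the CPU or GPU work takes longer, so a faster CPU only shows up when the CPU side is the long pole.

```c
/* Toy bottleneck model: per-frame CPU and GPU work overlap, so frame
   time is roughly the slower of the two. All numbers are invented
   purely for illustration. */
#include <stdio.h>

static double fps(double cpu_ms, double gpu_ms)
{
    double frame_ms = cpu_ms > gpu_ms ? cpu_ms : gpu_ms;
    return 1000.0 / frame_ms;
}

int main(void)
{
    /* GPU-bound shooter: a 20% faster CPU barely moves the needle. */
    printf("GPU-bound: %.0f fps -> %.0f fps\n", fps(8.0, 16.0), fps(6.4, 16.0));

    /* CPU-bound RTS: the same 20% CPU uplift shows up almost fully. */
    printf("CPU-bound: %.0f fps -> %.0f fps\n", fps(20.0, 10.0), fps(16.0, 10.0));
    return 0;
}
```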

Hell, with the MP3 requirements, I wonder how well they got their game engine to scale with cores to recommend that.

I think everyone needs to realize that CMT doesn't make the second core 80%; it makes the two combined 180%. It seems like a common misconception among some. Running Cinebench 11.5 with affinities set to cores 1, 3, 5 and 7 gets the same score as cores 0-3, which goes against the 20% CMT hit altogether.

Changing affinities doesn't necessarily make anything run better, but it prevents some of the stutters that might come about when running a few heavy threads on the same core(s), which just makes things feel smoother.
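
For anyone who wants to repeat that experiment without clicking through Task Manager, here's a minimal Win32 sketch of my own (nothing to do with Cinebench itself); it assumes the usual Windows numbering where module n's two cores are 2n and 2n+1:

```c
/* Minimal sketch: pin the current process to logical cores 1, 3, 5
   and 7 (one core per module on an 8-thread FX chip, assuming the
   2n / 2n+1 numbering). */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 0xAA;  /* binary 10101010 -> cores 1, 3, 5, 7 */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }
    printf("Pinned to cores 1, 3, 5, 7 (mask 0x%02X)\n", (unsigned)mask);
    return 0;
}
```

Same idea as ticking boxes in Task Manager's "Set affinity" dialog, just scriptable.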

In reality the module is 180% of a single core (not a full dual core), but the second core only runs that way when both are used at the same time, not when you use one or the other.

The method you described means the cores, be it the first or second of a module, do not have to share any resources. The second core of the module goes into an "inactive" state, which allows the other core to utilize all the resources available in that module, meaning the FPU and L2 that are shared within the module (plus the chip-wide L3).

But if you instead use cores 0, 1, 2 and 3, which should be the first two modules, you will get a slowdown compared to using cores 0, 2, 4 and 6 or 1, 3, 5 and 7.
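
Same numbering, in mask form; a throwaway sketch (again assuming the 2n/2n+1 module mapping) that builds the two affinity masks being compared:

```c
/* Throwaway sketch: build affinity masks for "one core per module"
   (cores 0, 2, 4, 6) versus "both cores of the first two modules"
   (cores 0-3) on a 4-module Bulldozer chip. */
#include <stdio.h>

int main(void)
{
    unsigned spread = 0, packed = 0;

    for (int module = 0; module < 4; module++)
        spread |= 1u << (2 * module);   /* first core of each module */

    for (int core = 0; core < 4; core++)
        packed |= 1u << core;           /* cores 0-3, two full modules */

    printf("spread mask: 0x%02X (cores 0,2,4,6)\n", spread);  /* 0x55 */
    printf("packed mask: 0x%02X (cores 0-3)\n", packed);      /* 0x0F */
    return 0;
}
```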

That's the design purpose: to allow almost the same performance as two cores while using less die space. Unfortunately, while it does that, it does not do it well enough to justify a full price range (low end to high end, i.e. $100-$1K) to fully compete with Intel.
 
As far as I'm aware, the CLA with Intel gives both companies the right to use the other's architecture. To my knowledge, there is nothing anywhere in the CLA that guarantees AMD the right to use any of Intel's additional specifications [MMX, SSE, AVX, etc.].

Not entirely correct.

The CLA doesn't grant AMD rights to the x86 extensions; they already have those. Those rights were granted in the '90s in a court case in California.

Thus, from the '90s until the sun explodes, AMD has the legal right to implement every single Intel-created extension of the x86 ISA in their x86 CPUs. This includes MMX, SSE2/3/4, AVX, and anything else Intel creates for the x86 CPU; there is even a legal argument that AES-NI and QS might both be available for AMD to implement (instruction-wise). When Intel created the IA-64 platform, one of the driving reasons was that they owned the entire ISA and were under no legal requirement to share it. That didn't work out quite like they wanted it to and has been dropped for the most part.

The CLA was a court-ordered settlement between the two in response to complaints from AMD / HP / Dell / etc. that Intel was engaging in anti-competitive business practices. The courts ordered the two to settle their dispute or else the court would do it for them. Both sides agreed to drop any and all litigation if certain concessions were made. First, Intel got rights to the AMD64 64-bit extension of the x86 ISA. Second, Intel affirmed that AMD has rights to all Intel-created extensions of the ISA. Intel also agreed to cease special "exclusive deals" for being the exclusive provider to OEMs and to remove the "artificial performance impairments" from their compiler with respect to AMD CPUs.

The FTC did their own investigation during this time and basically said the same thing: that Intel had to stop providing exclusive deals and "artificial performance impairments". The CLA only involves AMD and Intel, so the FTC's actions would force Intel to apply those provisions to any and all x86 CPUs ever made (VIA's, for now). It was also to ensure that Intel never acquired a monopoly on the consumer desktop market or commodity server market.
 
I have no issue with AMD's design being slower. The thing is, it's only about 10% on a per-core basis. Piss-poor compiling by the Intel compiler accounts for the other 40%.


http://www.legitreviews.com/article/1741/15/

If you look at the screenshot of the CPU usage, the game only uses four cores (with the HT cores unused); the difference between AMD and Intel is ~10%.

http://www.legitreviews.com/article/1741/16/

Even at 100% CPU usage in a single-core game ... ~10%.

But throw in an "Intel optimized" (compiled) game and all of a sudden it's 50%.

http://media.bestofmicro.com/G/O/324600/original/OC_StarCraftII.png

It's not just a coincidence that Intel wants AMD CPUs to look bad through sponsoring game devs. It's more a question of how they are doing it, and why everyone pushes to only show those games without letting anyone know Intel is responsible for making that happen.

Instead everyone wants to say "oh, Intel is soo much better, don't ever question why".

The reason why is what Intel fans don't want to see.
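
As for the "how", the mechanism usually pointed at is CPUID vendor-string dispatch: the binary picks its code path from the vendor ID instead of the feature flags. Here's a hedged sketch of that pattern (my own illustration, not decompiled code from any game or compiler):

```c
/* Sketch of vendor-string dispatch: choose a code path from CPUID
   leaf 0's vendor ID instead of the actual feature bits. Everything
   here is illustrative only. */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>  /* GCC/Clang helper for the CPUID instruction */

static int is_genuine_intel(void)
{
    unsigned eax, ebx, ecx, edx;
    char vendor[13];

    __get_cpuid(0, &eax, &ebx, &ecx, &edx);
    memcpy(vendor + 0, &ebx, 4);  /* vendor string lives in EBX:EDX:ECX */
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';
    return strcmp(vendor, "GenuineIntel") == 0;
}

int main(void)
{
    if (is_genuine_intel())
        puts("taking vectorized SSE/AVX path");
    else
        puts("taking generic baseline path");  /* even if SSE is supported */
    return 0;
}
```

Dispatching on the feature bits in CPUID leaf 1 instead would hand every CPU the fastest path it can actually execute.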


Man, I don't want to get banned, but this is the dumbest thing I've seen in a while. Not only is the FX overclocked higher (12%), it's also slower, not to mention it's a game to begin with.
 
I've worked on SW going back to the '70s [JOVIAL *shudder*]. It's fun, because you logically have to think everything through. Never mind that your HW can't do anything without the SW sending I/O back and forth... :non:


Hey, I understand that, but it doesn't change much. I'm happy there are programmers out there; now I don't have to learn it! :kaola:
 
I think everyone needs to realize that CMT doesn't make the second core 80%; it makes the two combined 180%. It seems like a common misconception among some. Running Cinebench 11.5 with affinities set to cores 1, 3, 5 and 7 gets the same score as cores 0-3, which goes against the 20% CMT hit altogether.

Changing affinities doesn't necessarily make anything run better, but it prevents some of the stutters that might come about when running a few heavy threads on the same core(s), which just makes things feel smoother.
Actually, AMD engineers are the ones that stated that a module performs at 80% of a dual-core CPU. http://techreport.com/articles.x/19514

The problem is the front end isn't strong enough to feed two cores; instead it's only 1.5x as strong as Phenom II's. The second major plague was how stupidly slow the cache memory is.

Separating the ALU and FPU worked fine and can take their design to the next phase. They just need to fix the initial design flaw.

As for core scaling not caring which cores you use: it very much does. http://www.pcper.com/reviews/Processors/AMD-FX-Processor-Review-Can-Bulldozer-Unearth-AMD-Victory/FX-versus-Phenom-Core-0

 
Man, I don't want to get banned, but this is the dumbest thing I've seen in a while. Not only is the FX overclocked higher (12%), it's also slower, not to mention it's a game to begin with.
Maybe you should have kept up with the discussion on software disabling functions on AMD CPUs when they are "optimized for Intel". SCII is heavily under the influence of Intel.
 
I feel like you read only the second half of his post. His links to H.A.W.X. and S.T.A.L.K.E.R. showed a ~10% difference. One scaled across four cores, the other across only one. Both showed much less difference than the SCII benchmark did.

Sorry, but gaming is not the only thing that matters (and it's bottlenecked by the GPU 90% of the time, so it should be overlooked). Heck, the whole reason I wanted a faster processor was for encoding/rendering, and BD fails at that if you ask me (per dollar, at least at launch). And for gaming it's usually slower than the X4 965 ($120).

The prices are not that bad anymore, with the 8150 sitting at $219.99 with a $15 promo code (and the 8120 at $174.99, which is pretty good). At that price it's cheaper than the i5 2500K and is better at some things but worse at 70% of things, though not by much, plus there's an upgrade path to Piledriver as well. If you already have a Phenom II X4 or X6 it's by no means an upgrade, more of a replacement. The money saved buying an 8120 over the 2500K is around $35, and that can go toward a 212+, which means you can get good-enough gaming performance with maybe slightly better multithreaded performance than the i5 (around 10% while overclocked), but with around 20%+ less performance per core (while overclocked, against a stock i5).

I'm really hoping that when PD releases, AMD prices it right this time. We all know it's not going to beat Intel's i5s and i7s, so why price it as high? I'm hoping it can beat Intel's first-gen i7s though (overall performance, not one thing out of 100), and I don't think that's asking too much.


We will have a good guess at PD performance when Trinity comes out in May (I hear the 15th). Just add maybe 5-10% more performance (PD will have L3 cache) and that will be PD. I'm pretty sure PD will be a bigger boost (from BD) than Sandy-to-Ivy is. Who would even upgrade to Ivy if they already have an i5 2500K or i7 2600K? It would make no sense unless TDP matters that much. They would see a bigger benefit from buying a new video card or an SSD.
 
Maybe you should have kept up with the discussion on software disabling functions on AMD CPUs when they are "optimized for Intel". SCII is heavily under the influence of Intel.

As I said, SCII is a heavily CPU-dependent game; all RTSes are.

And maybe we should cut the "Intel optimized" talk again, as it tends to go on and on and is overall pointless.

And jdwii, anyone with SB doesn't need to upgrade to IB, only to Haswell or AMD's equivalent, probably PD, as IB is not really meant to be a major performance jump from SB.
 
As I said, SCII is a heavily CPU-dependent game; all RTSes are.

And maybe we should cut the "Intel optimized" talk again, as it tends to go on and on and is overall pointless.

And jdwii, anyone with SB doesn't need to upgrade to IB, only to Haswell or AMD's equivalent, probably PD, as IB is not really meant to be a major performance jump from SB.
That's the only excuse anyone comes up with for an Intel-optimized game: oh, it's CPU intensive, AMD just sucks. It has nothing to do with AMD CPUs being fed totally generic 386 code. Code can't possibly slow a computer down.
 
Just leave...
If we want to listen to an argument, Jerry Springer or Jeremy Kyle are good programs, and we will get some entertainment from it.

You sound like a 3 year old brat, who is trying to justify his/her purchase.
We do not care, and as many have said already, we know the score regarding SB/FX and it will not ever change.
I'll actually give you a reason this time to go call your little moderator friend.

Recon, go *** yourself, you self-centered prick. Anyone that can call an OP a blatant idiot and have the mod select your answer as the best solution obviously has some inside contact. Straight up F U.
 
I still haven't read anyone from the "Intel side" admitting that they practically stole money from their pockets when they were cutting AMD's slice of the pie out of the game. They just cut one of the few tangible freedoms of the market: freedom of choice (when you actually want to buy).

And recon-uk: sarcasm and irony in a post are also a form of insult, more subtle to the eye, but an insult in the end. Not liking someone's argument, or not understanding it, doesn't give you the right to tell him/her to leave; that was also very childish of you. If you won't contribute a counter-argument to the sub-topic (or whatever it's called), then just move along and ignore it, or state something without pushing your opinion as fact on the other party.

And like I said... AMD would do the same in Intel's shoes; all the higher-ups come from the same schools (as in "how they do things"): the bottom line justifies the means, lol.

Cheers!
 
I would not call it "crap" so blatantly, because the FTC did in fact find Intel guilty.

Problem is that, like I said, people don't realize that when a company (no matter the color or name) starts pushing bad practices in the market (in Intel's case, abusing a dominant position), they start messing around with the hard-earned money of the "common folk". And trust me, you and I (and like 99.9% of the forum) are under the "common folk" tag. Those practices must NOT be forgotten; maybe forgiven in favor of "moving along", but never forgotten, or they'll do it again (bottom line, remember; screw the people). It's our duty as consumers to stop those behaviors and spread the word about them, or companies will keep on doing it.

Anyway, you might think of it as a "mantra" that needs to be spread, or as information for someone that doesn't have the details, but just telling him to stop saying it because you're tired of reading it... Maybe the good call is to just ignore it if you already understand and know it.

Cheers!
 
I'll actually give you a reason this time to go call your little moderator friend.

Recon, go *** yourself, you self-centered prick. Anyone that can call an OP a blatant idiot and have the mod select your answer as the best solution obviously has some inside contact. Straight up F U.

Wow, to think Chad and I were called out for telling you to zip it, and this is how you repay Reynod.
 
http://www.tomshardware.com/forum/forum2.php?config=tomshardwareus.inc&cat=28&post=330780&page=3&p=1&sondage=0&owntopic=1&trash=0&trash_post=0&print=0&numreponse=0&quote_only=0&new=0&nojs=0

The first one isn't on my list. A mod cleaned this one up pretty much. I do remember you going off on me for using the term fanboy, and you got mad when I pointed out your own avatar says fanboy.

I don't go around every Intel thread telling people they are idiots, but some people feel the need to inform FX owners they are stupid without even seeing what the chip can do first-hand. There are reasons certain programs don't run well on any AMD chip, not just BD. If you don't want to hear it, don't look.

If I can still say anything later, I'll link the thread where you called me an idiot several times because I refused to look at things the way you do. I'm not you; I will never see things the same way.
 
That's the only excuse anyone comes up with for an Intel-optimized game: oh, it's CPU intensive, AMD just sucks. It has nothing to do with AMD CPUs being fed totally generic 386 code. Code can't possibly slow a computer down.

I never said AMD was crap. I just stated a well-known fact: RTS games depend more on the CPU than they do on the GPU, unlike most games, which rely on the GPU more than the CPU. That's why SCII shows such differences, mainly due to, again, all the AI that's being calculated in real time, hence the name Real Time Strategy. Most FPSes have predetermined AI, which lightens the load on the CPU a lot.

I am sure if AMD makes a stronger arch, it will do well in RTS games as well.

What's funny is that if AMD were top dog again, this Intel-optimized issue wouldn't ever come up.

Still it needs to be let go.

I'll actually give you a reason this time to go call your little moderator friend.

Recon, go *** yourself, you self-centered prick. Anyone that can call an OP a blatant idiot and have the mod select your answer as the best solution obviously has some inside contact. Straight up F U.

No need to fly off the handle like that. You know the rules as do we all.

I would not call it "crap" so blatantly, because the FTC did in fact find Intel guilty.

Problem is that, like I said, people don't realize that when a company (no matter the color or name) starts pushing bad practices in the market (in Intel's case, abusing a dominant position), they start messing around with the hard-earned money of the "common folk". And trust me, you and I (and like 99.9% of the forum) are under the "common folk" tag. Those practices must NOT be forgotten; maybe forgiven in favor of "moving along", but never forgotten, or they'll do it again (bottom line, remember; screw the people). It's our duty as consumers to stop those behaviors and spread the word about them, or companies will keep on doing it.

Anyway, you might think of it as a "mantra" that needs to be spread, or as information for someone that doesn't have the details, but just telling him to stop saying it because you're tired of reading it... Maybe the good call is to just ignore it if you already understand and know it.

Cheers!

I have heard Intel had the stuff to fight it, but still, it is over and should stay that way. AMD and Intel are done with it; let's move forward and keep it out of this thread as much as possible.
 
Man, I don't want to get banned, but this is the dumbest thing I've seen in a while. Not only is the FX overclocked higher (12%), it's also slower, not to mention it's a game to begin with.

I'm not at the level most people reading this stuff are, but complaining that the FX has higher clocks isn't really fair. It's like saying comparing any two chips with different stock clocks is unfair, no? The FX comes at a lower stock clock, but I thought its turbo can push it higher at stock than the 2500K. Perfectly fair, I would have thought. I think we should be comparing what chips can do against each other, not what they can do at a certain clock speed. The FX has twice as many cores, but people don't complain about that being unfair. It should be compared bang for buck, I think.
 
http://www.overclock.net/t/1210060/fx8120-vs-2500k-benchmark-results
This dude benched almost everything:

AMD system specs:
CPU: FX-8120 @ 4.5GHz
Mobo: ASRock 990FX Fatal1ty
RAM: 4GB G.Skill 1600MHz DDR3
PSU: Thermaltake Toughpower 1000W
Video card: HD 6950 2GB

Intel system specs:
CPU: i5 2500K @ 4.4GHz
Mobo: ASRock Z68 Extreme3 Gen3
RAM: 4GB G.Skill 1600MHz DDR3
PSU: Thermaltake Toughpower 1000W
Video card: HD 6950 2GB


Pretty good comparison ... look at the memory read/write scores ... that is where the Intel CPU gets the grunt to shine in some of those benchies.

A very good report that one.
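
Those read/write scores are basically streaming-bandwidth numbers. A crude sketch of the idea, heavily simplified compared to whatever benchmark produced them (buffer size and timing method are my own choices):

```c
/* Crude streaming-bandwidth sketch: time one big memcpy (one read
   stream plus one write stream) through buffers too large to cache.
   clock() is coarse, so treat the result as a ballpark figure. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

enum { MB = 1024 * 1024, BUF_BYTES = 256 * MB };

int main(void)
{
    char *src = malloc(BUF_BYTES), *dst = malloc(BUF_BYTES);
    if (!src || !dst)
        return 1;
    memset(src, 1, BUF_BYTES);   /* fault the pages in before timing */

    clock_t t0 = clock();
    memcpy(dst, src, BUF_BYTES);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("~%.0f MB/s combined read+write\n", (2.0 * BUF_BYTES / MB) / secs);
    free(src);
    free(dst);
    return 0;
}
```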
 