AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet to be released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic, or it will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame baiting comments about the blue, red and green team and they will be deleted.

Enjoy ...
 
There is no justifiable reason to believe it won't improve Piledriver's results vs Trinity's, though.

The main point was that it is perfectly reasonable to expect a ~10% increase in performance (especially in gaming) from Piledriver's added L3. This is not the same as expecting a "BIOS fix" or "microcode update" to completely change the leaked benchmarks of a brand-new architecture, like we saw back around Bulldozer's launch.

I don't know much about the AMD line, but "improve Piledriver's results vs Trinity's"?
Isn't Trinity a codename for the Piledriver (enhanced Bulldozer) core in the mobile world?

Excluding this... I do agree that it's reasonable to expect a 10% increase in performance if you add an L3 cache to the CPU module (sadly not in all software).
 
Translation:

I don't see why people all say Bulldozer's a fail. It's based on an old architecture. AMD bought the rights from a company that was liquidated 10 years back. They used to make expensive processors that were hand-made, every silicon bit and all that. So you can see why they were expensive, but they were insane. However, as it's such an old architecture, AMD has yet to bring it up to today's top standards. But for a first run, they have done well. I have the FX-6100 and it's much better than reviewers say it is. They're so corrupt these days; they all get paid to say what they say now. My 6100 does probably 15-20 fps better in games than what the reviewers say.



?? That bit was translated but still doesn't sound right.


Sorry, I rushed when I typed. What I meant was: AMD bought the rights to an architecture, the company that designed it was liquidated 10 years back, and now AMD is using it.
 
http://www.obr-hardware.com/2012/07/knock-knock-is-anybody-else-there-amd.html

Some facts: AMD Trinity has a serious design bug in the current revision, and the error will not affect only the lower-performance, low-frequency models (laptops). The desktop FM2 versions in the A1 revision are buggy as hell. FX2 based on the Piledriver core is delayed to December/January 2013. The Radeon HD 7970 GHz Edition is still only paper-launched. The new Radeon HD 7950/7990 cards are delayed again. Look at the stock: it's at its lowest price in several years.

This guy has been spot-on with inside information for some time now.

From the front-page news stories: http://www.tomshardware.com/news/Piledriver-Athlon_II-X4-FM2-Trinity,16452.html

If these PD Athlons are indeed disabled GPU Trinity processors, then it seems GF is up to its old tricks with poor yields - same as what caused Llano to get delayed nearly a year, as GF had a lot of trouble fabbing the CPU + GPU on one die. There has to be some problem with GF's design rules or process causing this, if true. Maybe there's some difference between the CPU logic transistor design and the GPU transistor design? IIRC Intel's SB has several different transistor design parameters depending on the purpose.
 
I don't know much about the AMD line, but "improve Piledriver's results vs Trinity's"?
Isn't Trinity a codename for the Piledriver (enhanced Bulldozer) core in the mobile world?

Excluding this... I do agree that it's reasonable to expect a 10% increase in performance if you add an L3 cache to the CPU module (sadly not in all software).

You're right -- I used 'Piledriver' (architecture) where I should have used 'Vishera' (the actual codename for the desktop product). Thank you!

But you also reminded me of something that we should consider when discussing Trinity versus Vishera: Trinity is a Piledriver design (or "early revision") whereas Vishera should be a Piledriver+ (or "late revision").

It seems that from now on AMD will make a distinction between two stages (or revisions) of the same architecture, whereby the second stage might have further architectural changes and/or more features than the first. I read a couple of articles about this just a few days ago, and will update the thread once I find the links.
 
Somewhat true. AMD could have also just copied Intel's CPU itself, or maybe Intel's logical cores were already coded into Windows to begin with (except for Windows XP without the HT hotfix), and since AMD likely couldn't infringe on whatever patent is out there, they had to wait until MS coded support for their "3/4 core".

The thing is, you can't expect AMD to be allowed to ride on Intel's software/hardware implementation. I would venture to bet that AMD had to do it the way it happened. Just more fuel for the marketing blunder.

SMT has been around LONG before Intel used HTT.
 
IIRC the original Bulldozer thread got over 200 pages before it got locked & then deleted 😛..

Oh the times. Passion, tension, and anger galore. Probably one of the "best" threads from here. There was also that old golden egg from the "will-my-PC-turn-on-again-if-I-buy-another-power-cable" guy -- does anyone still remember it and have the link?
 
http://www.obr-hardware.com/2012/07/knock-knock-is-anybody-else-there-amd.html

Some facts: AMD Trinity has a serious design bug in the current revision, and the error will not affect only the lower-performance, low-frequency models (laptops). The desktop FM2 versions in the A1 revision are buggy as hell. FX2 based on the Piledriver core is delayed to December/January 2013. The Radeon HD 7970 GHz Edition is still only paper-launched. The new Radeon HD 7950/7990 cards are delayed again. Look at the stock: it's at its lowest price in several years.

This guy has been spot-on with inside information for some time now.

That guy spread quite a *lot* of FUD before Bulldozer's launch; his posts about AMD products are always some sort of amateurish rant. There might be some truth in his latest post, but he doesn't care to elaborate either on Trinity's "serious design bug" -- reviewers everywhere seem to have missed it, but it might come up later -- or on what his sources are for all of this information. I would call FUD on the delay of "FX2 based on Piledriver" (I take it as a reference to Vishera).
 
If these PD Athlons are indeed disabled GPU Trinity processors, then it seems GF is up to its old tricks with poor yields - same as what caused Llano to get delayed nearly a year, as GF had a lot of trouble fabbing the CPU + GPU on one die. There has to be some problem with GF's design rules or process causing this, if true. Maybe there's some difference between the CPU logic transistor design and the GPU transistor design? IIRC Intel's SB has several different transistor design parameters depending on the purpose.

Except there really isn't a reason for poor yields this late in 32nm production. That points to a design problem. This is probably why AMD had to guide down their projections for the next quarters as well. Trinity should have a huge advantage over Intel right now on the low end but it's not materializing. Next year it's going to get worse fighting Haswell.

Lucky for AMD they have several console design wins, but the margins there aren't that high.
 
SMT has been around LONG before Intel used HTT.
I can't find the link for the Windows XP HT patch right now; it's somewhere 20+ pages back. The thing is, Intel had to patch Windows XP to PROPERLY schedule the HT cores. It wasn't an automatic thing. IT WAS A PATCH that happened 10 years ago, just like AMD's. Without the patch, Windows XP saw the HT-enabled P4 as a full dual-CPU system and ran it as just that.

There is a distinct difference between HT and SMP (Symmetric Multi-Processing–multiple CPUs on a mainboard). SMP is chipset-dependent and requires a coordinated effort by many factors off the CPU itself (mostly for cache coherency). This is one of the main reasons Microsoft has been “complaining” lately about implementing HT in its Windows NT-based operating systems (NT, 2K, and XP). These versions of Windows use something called a HAL (Hardware Abstraction Layer) which logically isolates everything a PC can do from the PC itself.
...
It is ironic, actually, as just about the time Microsoft releases their greatest operating system (Windows XP) the very HAL it's designed on will have to be modified extensively to accommodate HT. Interesting.
http://www.geek.com/articles/chips/hyperthreading-implementation-in-windows-xp-20020110/

That's right: the "logical cores" had to be handled by Windows as logical cores and not as separate CPUs. All too often, people forget things that happened a long time ago, or maybe never knew about them in the first place.

It's funny reading some of the comments from back then:

If H.T. were to work in some sophisticated way OTHER than “simulating SMT”, then every major operating system and multithreaded piece of software would need to be rewritten to optimize scheduling of the microthreads. Not a great way to get market acceptance.

Sound familiar?

By the way, HTT is AMD's HyperTransport technology; HT is Hyper-Threading.
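
For anyone curious what "Windows treating logical cores as logical cores" actually involves, here's a rough C sketch of my own (an illustration only, not the XP hotfix itself) that asks Windows for its CPU topology through the Win32 call GetLogicalProcessorInformation() and counts physical cores versus SMT logical processors - the same kind of information an HT/SMT-aware scheduler has to act on:

```c
/* Rough illustration only: count physical cores vs. SMT logical processors
 * with the Win32 call GetLogicalProcessorInformation(). This is the kind of
 * topology data an HT/SMT-aware scheduler relies on; it is NOT the XP hotfix. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD len = 0;

    /* First call deliberately fails and reports the buffer size we need. */
    GetLogicalProcessorInformation(NULL, &len);

    SYSTEM_LOGICAL_PROCESSOR_INFORMATION *info = malloc(len);
    if (info == NULL || !GetLogicalProcessorInformation(info, &len)) {
        free(info);
        return 1;
    }

    int physical = 0, logical = 0;
    for (DWORD i = 0; i < len / sizeof(*info); i++) {
        if (info[i].Relationship == RelationProcessorCore) {
            physical++;
            /* Each set bit in ProcessorMask is one logical processor
             * (an SMT sibling) belonging to this physical core. */
            for (ULONG_PTR mask = info[i].ProcessorMask; mask != 0; mask >>= 1)
                logical += (int)(mask & 1);
        }
    }

    printf("%d physical cores, %d logical processors\n", physical, logical);
    free(info);
    return 0;
}
```

The XP hotfix back then, and the Bulldozer/Piledriver scheduler patches now, both boil down to making the scheduler act on this distinction instead of treating every logical processor as an equal, independent CPU.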
 
 
 
Thanks for getting the thread back to working status, mods.

And regarding the L3 cache... The problem is that the BD design added latency at every cache level compared to K10, so the gains the L3 delivered in K10 won't simply carry over to the BD/PD design (unless they've reduced the latency somewhat). That's why I don't trust that 10% improvement claim.

Besides, like palladin says, you really don't want to be in L3 land...
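
To put a rough number on why "L3 land" hurts, the usual trick is a pointer-chasing loop like the C sketch below (my own example; the buffer sizes and the "L2-ish"/"L3-ish" labels are assumptions, not measured K10 or BD figures). Every load depends on the previous one, so the average nanoseconds per load roughly equals the latency of whichever cache level the working set lands in:

```c
/* Crude pointer-chase latency sketch (illustration only; the buffer sizes
 * are arbitrary picks, and the output is ballpark, not K10/BD data).
 * Each load depends on the previous one, so average time per load roughly
 * equals the latency of whichever cache level holds the working set. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_load(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    if (next == NULL)
        return -1.0;

    /* Sattolo's algorithm: shuffle the indices into one big random cycle,
     * so the chase hops all over the buffer and defeats simple prefetching. */
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    size_t idx = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < iters; i++)
        idx = next[idx];                 /* serially dependent loads */
    clock_t t1 = clock();

    volatile size_t sink = idx;          /* keep the loop from being optimized away */
    (void)sink;
    free(next);
    return (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)iters;
}

int main(void)
{
    /* 256 KB should mostly sit in L2; 4 MB is too big for L2 but fits a 6-8 MB L3. */
    printf("~L2-sized working set (256 KB): %.1f ns/load\n",
           ns_per_load(256u * 1024u, 20000000u));
    printf("~L3-sized working set (4 MB):   %.1f ns/load\n",
           ns_per_load(4u * 1024u * 1024u, 20000000u));
    return 0;
}
```

Run both sizes on a Deneb/Thuban and on a BD chip and you can judge for yourself how comparable the two L3s really are.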

Cheers!
 
 
^
triny - that's for YOU.

Heh, haven't seen him around in months - maybe the repo man came in the middle of the night and took both his computers 😛..

AMD stock seems pretty much stuck around the $4 mark - given their poor guidance for this quarter, my guess is that it'll float at that level for the next few months, maybe creep up a bit. AMD and their OEMs really need to release PD and desktop Trinity right now in order to take advantage of the back-to-school buying spree.

Sometimes I wonder if GloFlo purposely sabotages AMD a bit, maybe hoping AMD will try to find another foundry that'll use the SOI wafers AMD CPUs require. IIRC they are back to the original agreement where AMD pays for each wafer, whether good or bad, as the yield agreement in effect last year expired some time ago. AFAIK AMD is the only one requiring SOI and GF is the only foundry equipped to fab SOI in volume.
 
Oh the times. Passion, tension, and anger galore. Probably one of the "best" threads from here. There was also that old golden egg from the "will-my-PC-turn-on-again-if-I-buy-another-power-cable" guy -- does anyone still remember it and have the link?

LOL - I remember that thread - some 60+ pages all because some Canadian guy didn't wanna spend $5 on a power cable for his "very old computer not working" or whatever the thread title was 😛. IIRC some 5 years ago. That guy gave Canadians everywhere a bad rep.

Come to think of it, "Triny" is Canadian too - wonder if there's a connection somewhere 😀..

j/k
 
Except there really isn't a reason for poor yields this late in 32nm production. That points to a design problem. This is probably why AMD had to guide down their projections for the next quarters as well. Trinity should have a huge advantage over Intel right now on the low end but it's not materializing. Next year it's going to get worse fighting Haswell.

Lucky for AMD they have several console design wins, but the margins there aren't that high.

If Haswell's GPU comes anywhere near living up to the hype, it'll be interesting to see the reaction in this thread (assuming it is not defunct by then). My guess is that we'll see some hints this fall. Intel has never been as good as AMD at keeping info under wraps and unleaked..

The part of Read's conference call where he pointed to a mismatch between Llano production and chipset/mobo production just made me scratch my head - how could they not notice something as fundamental as that? There's more to the story, IMHO, that they're not telling us or the investors.
 
Dude, I'll take my 980BE with L3 cache and you can run whatever other AMD chip without L3 cache.
Wanna bench?
😛

Uhm... You kinda missed my point, bro.

I'm not debating whether L3 helps or not. I'm debating the "L3 in PD will be worth a 10% improvement like K10 shows" claim, which I believe is inflated. Trinity doesn't have L3, but Vishera will. Extrapolating from what the L3 does for the Stars cores in Thuban or Deneb won't do, IMO.

Cheers!
 
Dude, I'll take my 980BE with L3 cache and you can run whatever other AMD chip without L3 cache.
Wanna bench?
😛


Well, Piledriver with no L3 cache is faster than Bulldozer with L3 cache in IPC, so let's bench those two. :kaola:


I take it the same way as clock speed vs. performance: it's only accurate within the same family of processors. For example, an Ivy Bridge i7 with no L3 cache would still kill a Phenom with L3 cache.

L3 cache only makes a 5-10% difference within the same family, on 90+% of apps.
 