AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post a question relevant to the topic, or information about the topic; anything else will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
Talk about putting words in people's mouths.

I said, and you can go back and quote me, that it will never replace main memory. Caz said it would. This was in reference to desktop CPUs and possibly notebook/laptop CPUs.

And you didn't know about horizontal until I told you about it. I've been following this for quite some time now; I happen to know Samsung people.

My comment "stacked on or next to" was on 03-10-2012 at 10:35:03 AM.
Your comment on "horizontal" was on 03-13-2012 at 10:01:18 PM.

Apparently I created an Einstein-Rosen bridge to travel forward in time to learn about horizontal memory placement from you. Lol. I've been following this for years as well. It's the hottest topic in the industry. The pictures were posted as reference so others could see just how long this stuff has been worked on.

Also, I never said it would replace main memory in all market segments, at all price points. That's absurd. I was specifically talking about the trend in lower-end x86 CPUs/APUs, which are integrating further and further into SoCs.


BTW, I have an ancient desktop turned DVR/web server/backup file server that I use on a daily basis, and it runs fine with 1GB of RAM. It has two Apache/MySQL servers running for some testing. I check email and IMs and surf the web on it when my main rig is doing something important (gaming).

I am looking to replace it, which is why I'm interested in Trinity/Ivy i3.
 
Umm, this thread is about Trinity / PD / the next AMD CPU. We're discussing everything here in the context of the desktop/laptop world of x86. There hasn't been a 32-bit-only x86 CPU made in years; I think VIA made the last ones, possibly the older Atoms.

Nearly every home PC has a 64-bit CPU, and Windows 7 x64 is outselling the 32-bit OS. OEMs aren't even offering 32-bit versions anymore.

Office applications, web browsers, and media encoders are all offered in 64-bit versions. The only software that hasn't made the transition is video games, strangely the segment that could benefit from it the most.

64-bit isn't tomorrow, it's today. It's happening right now.

Like I said, what rock have you been living under?


Hey man, you're 100% right, so why keep beating around the bush? We all know this, but 64-bit has been out for some time and nothing is really out for it, you should know that man. Now let's keep it cool man. :sol:
 
Actually... people/devs/we have known the advantages of coding for 64 bits for a long, long time, but most consumer-level applications are still stuck with 32-bit libraries, which kills most of the 64-bit momentum gained so far.
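To make that concrete, here's a minimal C sketch (Linux, and liblegacy.so is a made-up name) of why one leftover 32-bit library pins the whole app: a 64-bit process simply can't load it.

/* Minimal sketch, assuming a hypothetical 32-bit liblegacy.so:
 * a 64-bit process cannot load a 32-bit shared object, so dlopen()
 * fails with something like "wrong ELF class: ELFCLASS32". */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *h = dlopen("./liblegacy.so", RTLD_NOW);
    if (!h) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1; /* the app stays 32-bit because of this one .so */
    }
    dlclose(h);
    return 0;
}

Build with gcc main.c -ldl; the only cure is rebuilding (or replacing) the library itself, which is exactly the cost nobody wants to pay.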

It's really expensive (REALLY REALLY expensive) to redo code for systems that are no longer in the dev cycle/scope. It's like building it all over again, and most companies won't absorb that cost so easily.

That's why Java and C#, along with most of the interpreted languages (e.g. Perl), are becoming more and more important/popular. They let you focus on the problem at hand while they handle the "backend" of it (I think it's called an additional abstraction layer? don't remember XD).

Well, that's from my own experience so far (3 years' worth only) as a dev using mostly Java and C (yes, the original, lol).

If we take into account that moving from 16 bits to 32 bits took like 20 years, well... you can do the math on this one. The first fully 32-bit Windows was 2000/XP with NT 5.0, IIRC. I'm sure Palladin remembers it better, hahaha.

Cheers!


You're such a pro, I agree 100%!
 
Hey man, you're 100% right, so why keep beating around the bush? We all know this, but 64-bit has been out for some time and nothing is really out for it, you should know that man. Now let's keep it cool man. :sol:

I was an early Windows XP 64-bit adopter; the early days sucked because nobody made drivers. Otherwise it was such a great experience that when Windows 7 was released I got the 64-bit version, no questions asked. "64-bit" has been around since the '90s, when Windows NT 4.0 had a version for the DEC Alpha.

For consumers it's about software, and until last year the major "must have" desktop software wasn't 64-bit. That's now changed. It's really only games that haven't released a 64-bit executable (the game resources and data don't care what ISA your executable runs on). Most game producers would only need to recompile their binaries and libraries to 64-bit, possibly do some tweaking if they used hand-coded asm, and disable any 2GB memory limiter they used, and *poof*, 64-bit version done.
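For what it's worth, here's a rough sketch of what that recompile amounts to; MAX_HEAP_BYTES is a hypothetical stand-in for the kind of limiter I mean, not anything from a real engine.

/* Illustrative only: an artificial allocation cap that a 64-bit
 * rebuild lets you raise or drop entirely. */
#include <stdint.h>
#include <stdio.h>

#if UINTPTR_MAX > 0xFFFFFFFFu           /* 64-bit build */
#define MAX_HEAP_BYTES (8ull << 30)     /* cap can go way up, or away */
#else                                   /* 32-bit build */
#define MAX_HEAP_BYTES (2ull << 30)     /* the old 2GB safety limiter */
#endif

int main(void) {
    printf("pointer size: %zu bytes\n", sizeof(void *));
    printf("heap cap    : %llu bytes\n",
           (unsigned long long)MAX_HEAP_BYTES);
    return 0;
}

Same source, two compiles; that's the "recompile and remove the limiter" part, with any hand-coded asm being the only genuinely hard bit.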

Hmm for audio I'm using a Yamaha RX-3900 for my home theater. Have matching stand speakers to go with it.
 
Palladin is right on the recompile issue; assuming that no 32-bit-exclusive libraries are used, a recompile for 64-bit is trivial. Granted, you need to re-test everything just to make sure nothing fell through the cracks...

A good example of a "shortcut" that causes a problem in a recompile is described here: http://www.flounder.com/optimization.htm

Perhaps the best example of pure programmer stupidity in "optimizing" code occurred when I was porting a large library we used for our research project. Think of it as a 16-bit-to-32-bit port (it was an 18-bit-to-36-bit port, and the language wasn't C, but the details don't matter--you can write ghastly code in any language, and I've seen C programmers do things just as badly). The port mostly worked, but we had a really strange problem that showed up only under some rare conditions, but which crashed the program using the library.

I started looking. The heap was damaged. When I found how the heap was being damaged, it was being damaged via a bad pointer which allowed a store into a random place in the heap. OK, how did that pointer get bad? Four levels of storage damage down, and after 12 solid hours of debugging, I found the real culprit. But why did it fail? Another 5 hours, and I found that the programmer who wrote the constructor code for the data structure had a struct-equivalent something like {char * p1; char * p2;} where the pointers had been 16-bit, and we now used 32-bit pointers. In looking at the initialization code, instead of seeing something like something->p1 = NULL; something->p2 = NULL;, I found the moral equivalent of (*(DWORD*)&something.p1) = 0!

When I confronted the programmer, he justified it by explaining that he was now able to zero out two pointers with only a single doubleword store instruction (it wasn't an x86 computer, but a mainframe), and wasn't that a clever optimization? Of course, when the pointers became 32-bit pointers, this optimization only zeroed one of the two pointers, leaving the other either NULL (most of the time), or, occasionally, pointing into the heap in an eventually destructive manner.

I pointed out that this optimization happened once, at object creation time; the average application that used our library created perhaps six of these objects, and that according to the CPU data of the day before, I'd spent not only 17 hours of my time but 6 hours of CPU time, and that if we fixed the bug and ran the formerly-failing program continuously, starting it up instantly after it finished, for fourteen years, the time saved by his clever hack would just about break even with the CPU time required to find and fix this piece of gratuitous nonsense. Several years later he was still doing tricks like this; some people never learn.
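In modern C, the same trap looks roughly like this (struct and names made up for illustration). Build it 32-bit and the single store "works"; build it 64-bit and p2 silently keeps its garbage:

/* Sketch of the "clever" single-store zeroing described above. With
 * 4-byte pointers, one 8-byte store covers both fields; with 8-byte
 * pointers it only zeros p1, leaving p2 a stray pointer into memory. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct pair {
    char *p1;
    char *p2;
};

int main(void) {
    struct pair s;
    memset(&s, 0xAA, sizeof s);  /* garbage, as from a raw allocation */
    *(uint64_t *)&s.p1 = 0;      /* the "optimization" (and UB, to boot) */
    printf("p1 = %p  p2 = %p\n", (void *)s.p1, (void *)s.p2);
    return 0;
}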

And yes, you still see a LOT of nonsense like this. That's one reason why some programs are so shoddy when they first release.

I suspect the main reasons you don't see many games with native 64-bit exes are twofold:

1: DVD storage limitations: Having to store two separate .exes [and associated libraries] takes a LOT of disk space. And more DVDs = less profit.

2: How many games do you think are coded on engines that were designed before x64 came to the forefront? A lot of backend stuff would need to be updated to get the software working correctly, which again leads to less profit. [Unreal 4 should mitigate this issue somewhat.]
 
You mad Bro?

I think everyone knows 64-bit is the future, but most (90%) of the mainstream apps still use 32-bit. And there are three main reasons why: number one, not everything needs to be 64-bit; number two, it takes too long to make a 32-bit program into a 64-bit one, and the end results are barely worth it half the time; and the major reason: companies are cheap and some programmers are lazy.

Not to mention 32-bit can only address 4GB of RAM (2^32 bytes), and on Windows usually only about 3.25GB of that is usable, since MMIO and other hardware reservations are mapped into the same address space.

I ordered a new PC (Govt) for work this week and ticked 8GB of RAM and a decent quad, since I have a research job now and need the grunt, with lots of windows open and a large DB running in the background for doc reference work.

The IT guys rang and told me they only want 4GB in the work machines, as their systems still only support 32-bit stuff ... on Win7.

So yeah ... just thought I'd add a bit to support your point.

Let's not attack each other over the fact that IS infrastructure lags a bit in the real world ... at least in this case, with about 20 colleges on a central WAN all over the state.

 
I pray it's at least as good as Phenom per clock (PD would need to be 10-15% better per clock than BD), and is overclockable up to 4.8-5.2GHz (with a heatsink) with the mesh tech. This would be great for Piledriver. Plus I hope it has great price/performance. These may sound crazy, but I don't think I'm asking for too much. Heck, I even bet Intel can get 10-15% faster per clock with Ivy, and I think it will probably overclock to 5.0-5.2GHz with a good heatsink. Heck, Sandy does 4.4-4.6 pretty easily, and Ivy is going to have fantastic TDP numbers. But I feel AMD is being too quiet about PD, like they were with BD.

Heh, that's what worries most of us.

I dunno if AMD could get the clock mesh implemented in time for PD - my guess is that it would take a year or so, as the design would have to be done for GF's or TSMC's process and then tested and validated first. Maybe Excavator or whatever comes after PD... but then again, AMD could have been experimenting with the mesh already.

Well, I've already decided to go with an IVB 3770K build in May, so I'm not waiting around for PD or Trinity. I already put off a Sandy Bridge build a year ago, thinking to wait and see how good BD would be, and I'm not making the same mistake twice..
 
Let's not attack each other over the fact that IS infrastructure lags a bit in the real world ... at least in this case, with about 20 colleges on a central WAN all over the state.

Yeah, tell that to Palladin with his "crawl back under your rock" and other insults. Everybody else here in this thread follows the TOS and attacks the post, not the poster..
 
Yeah, tell that to Palladin with his "crawl back under your rock" and other insults. Everybody else here in this thread follows the TOS and attacks the post, not the poster..

From various posters within the last seven pages.

What drugs are you on? I want them, because 32-bit rules the land.

Isn't it funny how fanboyism can make smart people put forth dumb arguments?

A nice summary there.

PS - ignore the 'hater' comment - he calls everybody who disagrees with him a "hater" 😛.. And to think you used to be one of BD's ardent supporters a year ago, until reality set in..

Heh, make sure to bookmark Prof. Palladin's blanket statement, although this time he did state "main" memory instead of just "memory" per se, which I take as a bit of a retraction from previous posts in this thread where he stated stacking GPU memory on top of the die would never happen either..


Nearly every single one of your posts for the past few weeks has been targeted at insulting me, attempting to bait me, or just general rude behavior. Anytime someone disagrees with me you jump behind them and throw insults. It's clear you have a personal problem with me and see this forum as a means to defend whatever part of your ego I damaged.
 
From various posters within the last seven pages.

What drugs are you on? I want them, because 32-bit rules the land.

Isn't it funny how fanboyism can make smart people put forth dumb arguments?

A nice summary there.

PS - ignore the 'hater' comment - he calls everybody who disagrees with him a "hater" 😛.. And to think you used to be one of BD's ardent supporters a year ago, until reality set in..

Heh, make sure to bookmark Prof. Palladin's blanket statement, although this time he did state "main" memory instead of just "memory" per se, which I take as a bit of a retraction from previous posts in this thread where he stated stacking GPU memory on top of the die would never happen either..

Nearly every single one of your posts for the past few weeks has been targeted at insulting me, attempting to bait me, or just general rude behavior. Anytime someone disagrees with me you jump behind them and throw insults. It's clear you have a personal problem with me and see this forum as a means to defend whatever part of your ego I damaged.

The bolded quotes are from me, the others are not. And a quick review will show that these quotes were elicited in response to your own personal attacks, which you initiated first. As to the third-party stuff, that's what happens when you ignore posts directed to you, which you do quite often - posting a ton of stuff in what I guess is an attempt to change the subject 😛.. Many of us here go back a long ways, we have seen this behavior before and we are not so easily distracted. The best way to overcome that is to acknowledge when you are wrong and move on.. TBH, I have never seen you acknowledge being wrong - please point out one instance of that and I'll change my opinion of you, and probably others will too if that matters to you..

My ego is not nearly as delicate as yours - I'm here to learn, not pontificate and make myself out to be a know-it-all using THG as my own personal podium for discoursing on stuff whether I know what I'm talking about or not. When I pointed out - with links I might add - that BD likely had many more flaws than just the L2 cache and front-end prediction, you exploded into some anti-fanboy tirade that was entirely uncalled for. Ditto for when someone else pointed out that L3 cache has measurable benefits despite your insistence that it is nearly useless (to paraphrase). Before that, this thread was well-behaved for the most part.

And in fact, IIRC you mistook what Cazalan was saying about stacked memory as his suggesting it would be on-die, like a huge L4 cache or something. When he and I pointed out that it was stacked on top of the die, using TSV interposers, and might show up in Haswell's GPU, then (again to paraphrase) you said it would not be practical for a 130-watt TDP CPU due to silicon being a poor thermal conductor. So again we provided some links, and mentioned that Haswell is slated to be a 55W TDP chip. I guess after consulting with your Samsung engineer buddies, you decided maybe that was OK for GPU memory but never for main memory. While I'm sure Samsung engineers are competent, I doubt they know everything there is to know about what Intel or IBM, or AMD for that matter, are up to in their R&D departments.

Anyway, that's my recollection of this thread - I'm not going back over the last 30+ pages to quote you exactly, however. If I have mischaracterized what you said, I apologize in advance.
 
Being promoted to Moderator doesn't mean you can't say things. If you have ARGUMENTS to prove that the other user is wrong, say it, but in a decent way and without breaking the rules.

If you insult another user (Moderator or not) you accept the consequences of that.
 
Being promoted to Moderator doesn't mean you can't say things. If you have ARGUMENTS to prove that the other user is wrong, say it, but in a decent way and without breaking the rules.

If you insult another user (Moderator or not) you accept the consequences of that.

Exactly. Flames & insults may be entertaining to some, but they don't promote any useful exchange of knowledge.

I used to give JayDeeJohn 'what-for' on his AMD fanboyism quite frequently, but without the insults of course, both when he was a user and later after he became a mod. And he would unfailingly (and often incomprehensibly 😛) respond with counterpoints & links to back them up.

The insults were reserved for his sickly-green "What - Me Worry?" Alfred E. Neuman avatar 😀
 
^ PS: I note that since JimmySmitty got promoted to moderator, you have carefully refrained from calling him a fanboy or other insults 😛...

I guess you can learn from mistakes after all..


JS has always had limited mod authority, it comes with being a CR.

What really happened was after the last "AMD lies about their TDP" post, upon which I posted correct information, JS hasn't posted any biased posts.

And faze, you started with the insults and name-calling many, many pages ago over uArch. You disagreed strongly with something I posted and proceeded to insult me, upon which I corrected you, and ever since you've had a hard-on to get your e-thuggery going.
 
JS has always had limited mod authority, it comes with being a CR.

What really happened was after the last "AMD lies about their TDP" post, upon which I posted correct information, JS hasn't posted any biased posts.

I've known Jimmy here on THG for at least 5 years now, maybe 6, and he has never appeared biased - he just calls it as he sees it. I'm probably much more biased than he is.

And faze, you started with the insults and name-calling many, many pages ago over uArch. You disagreed strongly with something I posted and proceeded to insult me, upon which I corrected you, and ever since you've had a hard-on to get your e-thuggery going.

Sigh. That pretty much sums up your behavior here.

However, no, I did not 'start with the insults'. Here are all my posts to you starting around pg. 31:

"Which is why Intel is supposedly using low-power GDDR, a gig at least: http://semiaccurate.com/2011/11/17 [...] ts-corner/ and http://semiaccurate.com/2010/12/29 [...] y-bridge/. Of course, the bit about IVB is probably wrong as I have heard later rumors that Intel plans to put GDDR on Haswell instead. "

"There has already been some rather extensive analysis done on where BD fell short - I'm too busy to look up the links again but they were mentioned in numerous threads here previously. IIRC one review site had an analysis entitled "death by a thousand paper cuts" or something similar.
The shared front-end means that effectively a BD "core" has just two integer pipelines available when both cores of the module are in use. In contrast, K10.5 had 3 pipes per core available and Core2 and later iterations have 4 pipes.
There are many other 'paper cuts' to BD's performance that ultimately cost significant performance over what we were led to believe a year ago.. "

"Yes, I've noted the AMD slides showing 4 integer pipes per core since AMD previewed them a couple years ago. However what I said was effectively two int pipes per core, not actual number. From http://semiaccurate.com/2011/10/17 [...] -problem/:
Many reviewers disagree - there are many things suboptimal in BD, including the shared front-end design which is the basic idea behind the module approach. While it is true the cache latency and branch prediction are poorly implemented in this first iteration, the problems are more extensive than just those two areas.. "

Please point out any 'name calling' or 'insults' in those posts, or find other prior posts where I did.

However in response to the last post, this was your response:

You're spewing tons of BS about "effective" integer cores (WTF is "effective" supposed to mean? Is that a concept you created in your own head with next to no engineering experience?). Four pipelines per core means eight per module, far more than two.

You also fail to understand how instruction queuing and decoding work. That's an entire book in and of itself and far beyond the scope of this discussion. Safe to say, you're horribly wrong if you think that four schedulers couldn't keep two integer cores and two 128-bit FPUs busy as part of a superscalar design. Eventually you'll realize there are schedulers internal to each core, and then you're going to ponder what those schedulers are for and the difference between the ones outside and the ones inside.

Pretty much, you're just looking for reasons to hate on a product; this is the kind of blind hate that I can't stand to see spread, even on enthusiast websites.

Case closed.
 
Stable OC, to me that's all that matters. What cooler are you using? I'm only using the 212+.
My initial setup was a Delta V3 with low-speed fans; max temp was 49C after 6 hrs of P95. I upgraded the loop when I added dual water-cooled video cards; max temp is 35C at the same speed with the Raystorm. Haven't adjusted anything CPU-wise for about 2 months or better.
 
fud...
Dual core Trinity in Q3 2012
http://www.fudzilla.com/home/item/26383-dual-core-trinity-in-q3-2012
Trinity A85X chipset ready
http://www.fudzilla.com/home/item/26382-trinity-a85x-chipset-ready

The real question I have had is whether Trinity will work on FM1. I am guessing no, since there is a chipset dedicated to it, which would honestly make sense considering the GPU changes as well as the CPU changes.

JS has always had limited mod authority, it comes with being a CR.

What really happened was after the last "AMD lies about their TDP" post, upon which I posted correct information, JS hasn't posted any biased posts.

And faze, you started with the insults and name-calling many, many pages ago over uArch. You disagreed strongly with something I posted and proceeded to insult me, upon which I corrected you, and ever since you've had a hard-on to get your e-thuggery going.

I never said they "lie" about TDP. I just mixed up their server chips, where they do use ACP, with desktop. But they could use it if they so chose, which would give a lower power rating.
 
It doesn't work on the FM1 socket, but the older chipsets with the new FM2 socket will work.

So it's like AM3+. People got mobos that were "AM3" but in reality had an AM3+ socket with an older AM3 chipset (7/8 series).

That kinda sucks for those who first adopted Llano. Good to know for work since we will have to move our systems to it when it comes out.

A tad off topic but this is interesting:

http://www.hkepc.com/7672/page/6#view

If those benchmarks are true, then my hopes of a sub-$400 HD7970 might come true soon after Kepler hits. It looks like the GTX680 will perform better and have overall lower power consumption.

At least AMD got a few months to milk the cash cow. Hope it helps R&D.

Also, one last thing. I am a mod now. Sure, it means I have power, lots more. But it also means I have a lot more responsibility. I don't mind a bit of side talk here and there, or debate. It's healthy to debate ideas and thoughts, that is, until we see the truth.

But let's keep it civil here. That means everyone, me included. I have known a few of you guys for a long time, since I started here. And others are new, but everyone can discuss without flaming or insults.
 
The real question I have had is whether Trinity will work on FM1. I am guessing no, since there is a chipset dedicated to it, which would honestly make sense considering the GPU changes as well as the CPU changes.
I am confused about that as well. It seems like AMD is saying that Trinity will work with both Llano FCHs (A55, A75) but may not work on socket FM1. Trinity might not work on a mobo with an A75 chipset and socket FM1, but might work on a mobo with socket FM2 and an A75 chipset.
Some other things consistently missing in Trinity-related 'leaks and rumors': PCIe 3.0 support. Wikipedia (not reliable enough) says Trinity will support PCIe 2.0. PCIe 2.0 also means Trinity might hold back performance in dual- or multi-GPU CFX/SLI, since newer cards are PCIe 3.0 ones. And sub-$100 AMD cards will be rebadged PCIe 2.0 cards so that they can Hybrid CFX with Trinity.
Trinity may also support AMD's full instruction set, which does not make much sense for an entry-level APU. AMD-V is okay, but AES, XOP, FMA4... I don't know much about these, but they sound high-end. If Trinity does support them, it might add a lot of value at that price range.
If Trinity supports AMD's full instruction set, then what's left for the Piledriver CPU? PD might support PCIe 3.0 if AMD supports it with PD-specific chipsets.
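If anyone wants to see what their current chip already reports, here's a rough C sketch using CPUID (GCC/Clang on x86; bit positions are from the vendor docs, so treat it as illustrative rather than gospel):

/* Probing the extensions mentioned above via CPUID. AES-NI is leaf 1
 * ECX bit 25; SVM (AMD-V), XOP and FMA4 are extended leaf 0x80000001
 * ECX bits 2, 11 and 16 respectively. */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned a, b, c, d;

    if (__get_cpuid(1, &a, &b, &c, &d))
        printf("AES-NI: %s\n", (c & (1u << 25)) ? "yes" : "no");

    if (__get_cpuid(0x80000001, &a, &b, &c, &d)) {
        printf("AMD-V : %s\n", (c & (1u << 2))  ? "yes" : "no");
        printf("XOP   : %s\n", (c & (1u << 11)) ? "yes" : "no");
        printf("FMA4  : %s\n", (c & (1u << 16)) ? "yes" : "no");
    }
    return 0;
}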
 