AMD Piledriver rumours ... and expert conjecture

We have had several requests for a sticky on AMD's yet-to-be-released Piledriver architecture ... so here it is.

I want to make a few things clear though.

Post questions or information relevant to the topic, or your post will be deleted.

Post any negative personal comments about another user ... and they will be deleted.

Post flame-baiting comments about the blue, red, and green teams and they will be deleted.

Enjoy ...
 
He's most prolly thinking: "OK, AMD needs faster memory, and since DDR4 chips are being produced right now (engineering samples, prolly), they'll most likely use it with PD".

Like gamerk said, it is very unlikely, since AMD hasn't announced anything about a new IMC for PD using DDR4, much less anything regarding a new socket. Not that they might not be working on it...

Instead of Yoda, you need the Sword of Omens, mal 😛

Cheers!

That's true, but Vishera isn't due for six months or more. They're planning for quad-channel DDR3, but with Samsung producing DDR4 they have time to change the memory controller.
Though it's just conjecture, it's still a possibility.

 
People should stop getting so worked up over baseless comments.

DDR4 is so different from DDR3 it won't be in mainstream products until 2014.

"DDR4 also anticipates a change in topology. It discards dual and triple channel approaches (used since the original first generation DDR) in favor of point-to-point where each channel in the memory controller is connected to a single module."
 
We won't be seeing DDR4 in consumer products for a while. And while the concept is to try to get away from channels, that simply isn't possible for the PC market. When you want to put in 4GB of memory, you're not soldering chips and creating traces to your CPU socket. So what we'll get is just another name for "channels". Each slot will represent a set of data buses from the chips to the memory controller on the CPU. If a CPU only supports X number of chips, then the motherboard will only have that number of slots available. Still, it should simplify things. Who knows, maybe memory controllers will actually mean something between different manufacturers.
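
A toy way to picture the difference between today's shared channels and the point-to-point idea quoted above (purely illustrative; no real board is wired up from a Python dict, and the slot counts are made up):

# Illustrative sketch only -- not any real board's wiring.
# Shared-channel (DDR3-style): several slots are multiplexed onto one channel.
shared_channel_board = {
    "channel_A": ["slot_1", "slot_2"],
    "channel_B": ["slot_3", "slot_4"],
}

# Point-to-point (the DDR4 concept above): one module per link, so the number
# of usable slots equals the number of links the CPU's memory controller exposes.
point_to_point_board = {
    "link_0": ["slot_1"],
    "link_1": ["slot_2"],
    "link_2": ["slot_3"],
    "link_3": ["slot_4"],
}

for name, board in (("shared channels", shared_channel_board),
                    ("point-to-point", point_to_point_board)):
    slots = sum(len(s) for s in board.values())
    print(f"{name}: {len(board)} links, {slots} slots, "
          f"{slots // len(board)} module(s) per link")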
 
People should stop getting so worked up over baseless comments.

DDR4 is so different from DDR3 it won't be in mainstream products until 2014.

"DDR4 also anticipates a change in topology. It discards dual and triple channel approaches (used since the original first generation DDR) in favor of point-to-point where each channel in the memory controller is connected to a single module."

That's true, but what if the engineers found that the IMC is one of the sources of BD's problems?
Wouldn't it be an opportune time to change to DDR4?
Vishera isn't due until Q4 or 2013, plenty of time.
 
They need to fix cache latencies and branching; this has been gone over already.

And how exactly do you do that without redesigning the architecture?

Especially the latency, as that had to have been a conscious design decision. A large shared cache tends to have higher latencies, but you increase the chance of data being accessible within the cache, increasing the chance you don't hit main memory.

Meaning that even if the latencies are high, if the requested data is in the cache, the larger size may be warranted. Thus, the cache may be the right size, too big, or even too small, depending on the specifics of the architecture.

Point being, you can't just "fix" the cache latencies; there's a design tradeoff involved.
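
To put rough numbers on that trade-off, here is a back-of-the-envelope average memory access time (AMAT) sketch. The cycle counts and miss rates are invented for illustration only, not Bulldozer's actual figures:

def amat(hit_time, miss_rate, miss_penalty):
    # Average Memory Access Time = hit time + miss rate * miss penalty (all in cycles)
    return hit_time + miss_rate * miss_penalty

# Hypothetical smaller/faster cache vs. a larger/slower shared cache,
# compared at two different main-memory penalties.
for penalty in (200, 400):
    small_fast = amat(hit_time=15, miss_rate=0.10, miss_penalty=penalty)
    large_slow = amat(hit_time=30, miss_rate=0.04, miss_penalty=penalty)
    print(f"miss penalty {penalty}: small/fast = {small_fast:.0f} cycles, "
          f"large/slow = {large_slow:.0f} cycles")

# With a 200-cycle penalty the small cache wins (35 vs 38 cycles); at 400 cycles
# the big cache wins (46 vs 55) -- which is the "right size depends on the
# architecture" point above.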
 
They need to fix cache latencies and branching; this has been gone over already.

And how exactly do you do that without redesigning the architecture?

Especially the latency, as that had to have been a conscious design decision. A large shared cache tends to have higher latencies, but you increase the chance of data being accessible within the cache, increasing the chance you don't hit main memory.

Meaning that even if the latencies are high, if the requested data is in the cache, the larger size may be warranted. Thus, the cache may be the right size, too big, or even too small, depending on the specifics of the architecture.

Point being, you can't just "fix" the cache latencies; there's a design tradeoff involved.
Nice job knowing nothing yet claiming you know more than the engineers at AMD. AMD has already said Piledriver will have IPC improvements.
Or do you magically know something AMD doesn't?
 
Nice job knowing nothing yet claiming you know more than the engineers at AMD. AMD has already said Piledriver will have IPC improvements.
Or do you magically know something AMD doesn't?

Actually, he probably does 😛. You might recall that AMD also said Bulldozer would have IPC improvements over Phenom II, and we all know how that one turned out...
 
Actually, he probably does 😛. You might recall that AMD also said Bulldozer would have IPC improvements over Phenom II, and we all know how that one turned out...
Yes, he probably does.
gamerk316 - silver Expert
Specialties : Audio Tech, CPUs, Graphics, Win 7
6976 messages

Congratulations gamerk316
 
[Image: intel-boards-small.jpg]

"The AMD / Intel dynasties set for Competition" (from 2007 to 2011)
http://www.legitreviews.com/article/1859/1/
 
They need to fix cache latencies and branching; this has been gone over already.

And how exactly do you do that without redesigning the architecture?

Especially the latency, as that had to have been a conscious design decision. A large shared cache tends to have higher latencies, but you increase the chance of data being accessible within the cache, increasing the chance you don't hit main memory.

Meaning that even if the latencies are high, if the requested data is in the cache, the larger size may be warranted. Thus, the cache may be the right size, too big, or even too small, depending on the specifics of the architecture.

Point being, you can't just "fix" the cache latencies; there's a design tradeoff involved.


As esrever said ... good job showing your lack of knowledge.

I could sit here and type out a long reply explaining the different components of a modern CPU and how they interact with each other, but you wouldn't listen and it would go flying over your head. So I'll make it short. CPUs are modular; BD in particular is designed to have various pieces added and removed. Modifying internal components doesn't require a redesign of the entire thing. Branch prediction / decoding is a separate micro-unit that acts as a miniature CPU in its own right. Improving cache latencies does not require a new architecture; both Intel and AMD have proven this many times over.
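
As a concrete (and heavily simplified) illustration of that point, here is a toy two-bit saturating-counter branch predictor. This is the textbook scheme, not AMD's actual design; it's only meant to show that the predictor is a small, self-contained unit that can be swapped or improved without touching the rest of the core:

# Toy 2-bit saturating-counter branch predictor (textbook scheme, not AMD's).
# Counter values 0-1 predict "not taken", 2-3 predict "taken".
class TwoBitPredictor:
    def __init__(self, entries=1024):
        self.table = [1] * entries          # start weakly not-taken

    def predict(self, pc):
        return self.table[pc % len(self.table)] >= 2

    def update(self, pc, taken):
        i = pc % len(self.table)
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)

# A loop branch: taken 9 times, then falls through once. After warm-up the
# predictor only misses at the loop exits.
bp = TwoBitPredictor()
pattern = ([True] * 9 + [False]) * 100      # 1000 branch outcomes
hits = 0
for taken in pattern:
    hits += (bp.predict(0x40) == taken)
    bp.update(0x40, taken)
print(f"prediction accuracy: {hits / len(pattern):.1%}")   # roughly 90%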

And as I noted above, you previously tried to say that SIMD instructions didn't work well in parallel in an attempt to convince people that AMD's Fusion idea was a bad one. That one statement from you pretty much threw your entire credibility away when it comes to micro-architecture. That you actually thought that and tried to use it in an argument shows the exact level of knowledge you have. And then based on this knowledge, you then say that AMD needs to redesign not their die, nor some specific component, but the ENTIRE architecture.

Putting the above together means one of two conclusions: either you're here trying to troll Triny / BM (if he's lurking somewhere) and know that what you're posting is wrong, or you don't know that what you're posting is wrong and actually believe it.

I don't know which is worse.
 
And AMD still hasn't used RDRAM! Crazy, huh? :pt1cable:

Come on, until AMD itself says so, it's not impossible for them to be working on DDR4 for PD, it's just very improbable. We all agree that a new socket (maybe new chipset) would not hurt them at all, but I'll just feel a lil' butt raped, lol. Plus, the APUs will be very happy to get DDR4 once they get the IMC working.

Cheers!
RDRAM wasn't an industry wide standard.

DDR3 was clearly going to be, as will DDR4, but your post showed you didn't understand this. :pfff:
 
That's true, but what if the engineers found that the IMC is one of the sources of BD's problems?
Wouldn't it be an opportune time to change to DDR4?
Vishera isn't due until Q4 or 2013, plenty of time.

No, Triny, it's too much of an architectural change, the market has shown no moves in that direction thus far, and I don't think we're seeing any DDR4 for sale on Newegg.
 
Yes, he probably does.
gamerk316 - silver Expert
Specialties : Audio Tech, CPUs, Graphics, Win 7
6976 messages

Congratulations gamerk316


Earlier he posted,

gamerk316

Which would be a disaster performance-wise.

Again, if a CPU can only handle 2 instructions per core [let's assume AMD sticks with its module approach for this discussion], what good do 400 separate SIMD cores do? Now, instead of one strong FPU, you have 400 weaker ones, and CPU FP performance WILL suffer as a result.

There's a reason why massive FP datasets [rendering, and more recently encoding] have moved to the GPU, but normal FP processing has remained CPU bound.

This is the same exact reason why I called Larrabee DOA from the start: CPUs are good at doing one thing at a time REALLY quickly and rapidly swapping between tasks. GPUs are good at doing lots of simple math equations at the same time, but stink royally on equations that can't make use of their many individual FPU units [shaders]. Any attempt to mix the two will lead to severe performance degradation, for reasons that should be fairly obvious.

The guy may have a ton of posts, but he doesn't know squat about uArchs and ISAs.
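
For anyone wondering what "equations that can't make use of its many individual FPU units" looks like in practice, here's a minimal sketch contrasting an independent (data-parallel) loop with a serially dependent one. It's illustrative only; the arithmetic itself is meaningless:

data = [float(i) for i in range(1_000_000)]

# Data-parallel: every element is independent, so 400 weak FPUs could each
# take a slice and finish much sooner (ignoring transfer and launch overheads).
scaled = [x * 2.0 + 1.0 for x in data]

# Serially dependent: each step needs the previous result, so extra FPUs
# don't shorten the chain -- only a faster single unit (a CPU core) does.
acc = 0.0
for x in data:
    acc = acc * 0.999 + x

print(scaled[:3], round(acc, 2))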
 
RDRAM wasn't an industry wide standard.

DDR3 was clearly going to be, as will DDR4, but your post showed you didn't understand this. :pfff:

Haha, come on Chad. I just poked you with the "AMD always behind Intel" part.

Anyway, I'd say that it won't matter to Intel that AMD adopts it first TBH, except for braggin' rights. Intel is in no hurry for more bandwidth AFAIK and AMD needs to catch up in CPU anyway. The only benefit for AMD would be "testing the IMC" for the next line of APUs, or something like that.

Cheers!
 
Doesn't really matter who gets DDR4 first, like how it doesn't matter that there is no support for the best DDR3 yet.

OEMs will be using 1333 for at least another year or two. They might get up to 1600, but it is a MASSIVE jump, not sure they want to risk the obvious price hike. *sarcasm*
 
Thinking about DDR4 at this time is entirely pointless, not to mention that nothing is able to use it, let alone anyone on this forum wanting it. The last thing many of us want is for our rigs to be immediately considered dated when many here are sporting SB, soon to be IB. Imagine if DDR4 had really appeared on the market around summer, right when many IB models were just coming out, only for people to start complaining that parts not even months old don't support the latest tech yet cost an arm and a leg.
 
That's true, but what if the engineers found that the IMC is one of the sources of BD's problems?
Wouldn't it be an opportune time to change to DDR4?
Vishera isn't due until Q4 or 2013, plenty of time.

Then they go to faster-clocked DDR3, or a triple-channel or quad-channel DDR3 implementation like Intel has. They can easily get 125% more bandwidth with existing, cheap, mass-produced technology.

There's no value in skipping to DDR4, which will be very expensive until everyone is using it already. DDR4 will start out 2-4 times more expensive, and volume shipments won't come until 2014. The last thing AMD needs is to hike the MB+CPU+RAM combo price. Maybe for the 4th-generation Bulldozer (Excavator) core in 2014.
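
As a rough sanity check on that bandwidth figure, using theoretical peaks (actual sustained bandwidth is lower, and the speed grades here are just examples):

# Theoretical peak = transfer rate (MT/s) * 8 bytes per 64-bit transfer * channels.
def peak_gb_s(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000

dual_1600 = peak_gb_s(1600, 2)     # ~25.6 GB/s
quad_1866 = peak_gb_s(1866, 4)     # ~59.7 GB/s
gain = quad_1866 / dual_1600 - 1
print(f"{dual_1600:.1f} GB/s -> {quad_1866:.1f} GB/s (+{gain:.0%})")
# About +133% with these grades; the exact number depends on which DDR3 speeds
# you compare, but it's in the same ballpark as the figure above.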
 
Doesn't really matter who gets DDR4 first, like how it doesn't matter that there is no support for the best DDR3 yet.

OEMs will be using 1333 for at least another year or two. They might get up to 1600, but it is a MASSIVE jump, not sure they want to risk the obvious price hike. *sarcasm*
OEM is Trinity.
Vishera is supposed to be high end; they may take the "bring it on" attitude, and they have that option with Samsung. As far as I know it's supposed to be quad-channel DDR3, though.
 
No, Triny, it's too much of an architectural change, the market has shown no moves in that direction thus far, and I don't think we're seeing any DDR4 for sale on Newegg.
How could you see it for sale? Vishera isn't on sale until Q3 - Q1 2013. Can anyone say for certain when DDR4 will be used?
 
Thinking about DDR4 at this time is entirely pointless, not to mention that nothing is able to use it, let alone anyone on this forum wanting it. The last thing many of us want is for our rigs to be immediately considered dated when many here are sporting SB, soon to be IB. Imagine if DDR4 had really appeared on the market around summer, right when many IB models were just coming out, only for people to start complaining that parts not even months old don't support the latest tech yet cost an arm and a leg.
It stimulates conversation.
 
You won't be seeing DDR4 for at least another year at the earliest. Most likely 18 months, and then only in high-end systems. At ~24 months it should be cheap and plentiful. DDR4 is a pretty significant change from DDR1~3; its signaling methods and termination are different. It's going to require a new memory controller, a new bus protocol, and new motherboard designs.

Also, main memory bandwidth typically isn't a limiter on what the consumer market does, and it's rarely a limiter on what enterprise systems do. CPU caching is so good these days that you'll rarely see a big difference even between large bandwidth upgrades. The only things that are super sensitive to memory bandwidth are GPUs, due to their large SIMD arrays needing access to massive amounts of data. I wonder how many of the local posters knew that GPUs have L1 and L2 cache inside them, along with prefetch engines and schedulers?
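
A rough model of that caching point, with made-up numbers (the access rate and hit rates are invented purely for illustration):

# How much DRAM traffic a core actually generates once the cache hit rate is high.
accesses_per_second = 2e9     # memory operations issued by the core (invented)
line_size_bytes = 64          # data fetched from DRAM per cache miss
for hit_rate in (0.90, 0.97, 0.99):
    dram_gb_s = accesses_per_second * (1 - hit_rate) * line_size_bytes / 1e9
    print(f"hit rate {hit_rate:.0%}: ~{dram_gb_s:.1f} GB/s of DRAM traffic")
# At a 99% hit rate the core only asks DRAM for ~1.3 GB/s, so doubling the
# bus's theoretical peak barely shows up. A GPU's thousands of threads, with
# far less cache per thread, are a different story.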
 
OEM is Trinity.
Vishera is supposed to be high end; they may take the "bring it on" attitude, and they have that option with Samsung. As far as I know it's supposed to be quad-channel DDR3, though.
I am going to assume you missed what I wrote there. Sarcasm.

100% of the market does not have access to DDR4. Which means making CPUs support it now, when 0% of people would use it, would be an utter waste of time.
 
Considering how long it took AMD to move to DDR3 after Intel started using it, thinking PD will be using DDR4 is absolutely ludicrous.

Chad is pretty dead on with the memory. As I have said many a time before, DDR4 is spec'd and slated for release in 2014, per Intel, who will probably start using it a year before AMD, if not more.

Intel always moves to the new memory technologies upon release, while AMD tends to wait till it's actually affordable.

Considering the change to DDR4 from DDR3 uses a different approach to how the IMC connects to it (instead of multiple DIMMs per channel, it will be a single module per channel), AMD might not want to wait, as it may show increases that will benefit their IGP as well as server parts.

But still, we have 2 years till it's possibly adopted by Intel, and even then probably 3-4 years until it's as affordable as DDR3 currently is (8GB of nice Corsair is about $50-$60 right now).

So no, PD won't use DDR4. It and many of its successors will use DDR3.
 