Skylake Xeon Platforms Spotted, Purley Makes A Quiet Splash At Computex



Samer, you are being unreasonable.

The majority argument does not apply here. These are servers. They go into $50,000+ systems, not $499 Best Buy ones. They will be different. Also, HBM is NOT cheap; that's the reason AMD and Nvidia aren't using it on their sub-$400 cards. Do you think Intel will put it on their $150 CPUs?

32-128GB is too small. Purley supports 6TB, yes, 6 terabytes of memory per socket. Each HBM2 chip supports 8GB. You'd need 16 of them just to reach a puny 128GB. Even the pre-Purley platform can support 1.5TB of memory per socket, and 1.5TB works out to 192 8-Hi (highest-density) HBM2 chips. Now tell me those are cheap.

Plus you can't upgrade them.
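
For anyone who wants to check that arithmetic, here's a minimal sketch in Python, assuming 8GB per 8-Hi HBM2 chip as stated above:

```python
# Back-of-the-envelope check of the HBM2 chip counts quoted above.
# Assumption: 8 GB per 8-Hi HBM2 chip/stack (the highest density mentioned).
GB_PER_HBM2_CHIP = 8

def chips_needed(capacity_gb):
    """Number of 8 GB HBM2 chips required to reach a given capacity."""
    return capacity_gb // GB_PER_HBM2_CHIP

print(chips_needed(128))        # 16 chips for a "puny" 128 GB
print(chips_needed(1536))       # 192 chips for 1.5 TB (pre-Purley, per socket)
print(chips_needed(6 * 1024))   # 768 chips for Purley's 6 TB per socket
```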
 
I'm thinking quite a few of those pins will be redundant in a single-CPU setup. They're probably not used except in a dual-CPU configuration, where communication between CPUs has to deal with much larger-than-normal RAM and bandwidth. But that's just a guess.
 
You're speaking on an open forum. You're speaking to everyone reading it. If you intend to speak specifically to one company or another, you're going about it the wrong way.

That doesn't bother me. I'm not trying to force you or anyone else to value my opinion. However, demonstrable facts and logic should be valuable to everyone. Thus far, you're ignoring almost all of them from everyone else.

And what qualifies you to make that claim? What great insight do you have into me, my knowledge, and my experience? Or into everyone else on this forum telling you otherwise? To say this to another person means you consider yourself more of an expert. So then, let me flip this on you: why should I or anyone else consider you to be the expert on the matter, when nearly everything you say is extremely controversial, to say the least?

No indeed, you haven't replied to any of them, nor to any point someone else has countered you on. As David said, you're being completely unreasonable in continuing down this line of thought without addressing the many valid issues brought up against it.

Yep, they already make a lot. And if they were to add HBM options, that would increase the number of SKUs even more. On top of that, you're suggesting motherboard manufacturers should add even more products to their portfolios: those with DIMM slots and those without. That means even more cost added to the products due to increased manufacturing costs.

Please, cite this proof you have that the majority of these types of consumers will be served well by all these proposed changes of yours. I'll wait.

So now in addition to CPUs and microarchitecture, you're claiming to be an expert on material and fabrication costs?

Again, why should they be? You're claiming to be more clever, more insightful, and all around smarter than their entire R&D divisions.
 
That's a really good point! Currently, I think Socket 2011 maxes out at 2 QPI links and is limited to dual-CPU configs, but the article says:
The platform will support both E5 and E7 processors and scale to 2, 4 and 8-socket implementations. ... There is also the notable mention of a new 2- and 3-channel UPI interconnect
Up to 3 channels means the pins must exist for 3 UPI links. I'm guessing they intend to use a cube topology to support 8-CPU configs. I wonder if 4-CPU configs will use the 3rd link for a point-to-point topology, or if 2-CPU configs will use more than 1 link for added bandwidth.
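
Just to illustrate that cube guess (a hypothetical sketch, not anything Intel has confirmed): give each of the 8 sockets a 3-bit ID and link it to the three sockets whose ID differs in exactly one bit, which consumes exactly 3 UPI links per socket.

```python
# Hypothetical 8-socket cube (3-D hypercube) topology.
# Each socket links to the three sockets whose 3-bit ID differs in one bit,
# so exactly 3 UPI links per socket are used and any socket is at most 3 hops away.
def cube_neighbors(socket_id):
    return [socket_id ^ (1 << bit) for bit in range(3)]

for s in range(8):
    print(s, "->", cube_neighbors(s))   # e.g. 0 -> [1, 2, 4]

# Total links: 8 sockets * 3 links / 2 ends per link = 12 UPI links.
```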
 
Intel's experts aren't reading this. Why would they?

They know far more about these products and the underlying technology than any of us. Do you really think they go home and pore over comments on tech websites at night? Do you think we'd have any ideas they haven't already considered and analyzed?

If they do any work-related reading, after hours, it's probably in semiconductor research journals and various market & macro-economic reports.

Yes, so why waste your time?

If someone exhibits a willingness to learn, then it's worth taking the time and trouble to explain. But I'm not sure Samer is very interested in learning from anything that any of us have to say.
 
The extra pins are likely for more robust power delivery. Not only is TDP going up to 160W with the Xeons, but the Xeon Phi chips use the same pin-count socket. Top TDP for Xeon Phi will be 215W.
 
Ah, didn't know that.

In summary, we've amassed the following possible reasons for the additional pins:

  ■ Upgrading from 4 to 6 memory channels.
  ■ Replacing 2 QPI links with 3 UPI links.
  ■ 8 more PCIe lanes (up from 40).
  ■ OmniPath interconnect.
  ■ Increasing supported TDP from 165 W to 215 W.

Did I miss anything?

I'm already starting to get excited for my next build. I sure hope they continue the E5-16xx series (or equivalent), in this socket. I'd like to stay in the range of $300 - $500, for my next CPU.

The only thing possibly lacking is PCIe 4.0. Not that I need it. Just would've made my build a bit more future-proof (PCIe was a big selling point of Socket 2011, for me). And if it's not in this socket, then their server/workstation platform probably won't get it until at least 2019.
 


Why would Intel use HBM when they have already heavily invested in HMC?

http://www.extremetech.com/extreme/185007-intels-next-gen-xeon-phi-will-be-3x-faster-include-next-gen-hybrid-memory-cube-tech

Considering that Intel has the vast majority of the server CPU market share, I think they know what they are doing.
 
Intel's experts aren't reading this. Why would they?

They know far more about these products and the underlying technology than any of us. Do you really think they go home and pore over comments on tech websites at night? Do you think we'd have any ideas they haven't already considered and analyzed?

If they do any work-related reading, after hours, it's probably in semiconductor research journals and various market & macro-economic reports.

Yes, so why waste your time?

If someone exhibits a willingness to learn, then it's worth taking the time and trouble to explain. But I'm not sure Samer is very interested in learning from anything that any of us have to say.

Sometimes we just like to see what people have to say about the stuff we're making. It makes for a good chuckle on our lunch breaks. ; ) I'm just sad the chipset wasn't in any of those photos. IMO there's some cool stuff going on there as well!
 
Greetings, newcomer!

Just curious: what's your job function? I'm not trying to get your full title, etc. Just wondering in what aspects you're involved.

I'm looking forward to using this platform in my next workstation, so thanks for all your efforts.
 
I'm just gonna hazard a guess. The black DIMM slots denote channel one. The first two pics show 4-channel boards; only the third pic shows a 6-channel board. If you use the first set of slots for memory (DIMM 1, 3 and 5), you could get to 192GB of memory using 32GB DIMMs. If you use the second set of slots for NVDIMMs, conservatively speaking, you could have 3TB of storage without using any PCIe ports. Once the boards and 3D XPoint are out, that should jump to 6TB immediately. The only remaining question in my mind is: could you boot from it? Memory as storage on a memory bus has me excited.
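
Here's a quick sketch of that math in Python; the 512GB and 1TB NVDIMM sizes are just the figures implied by the 3TB and 6TB guesses above, not announced capacities:

```python
# Rough capacity math for a 6-channel board populated as described above.
CHANNELS = 6

dram_gb = CHANNELS * 32            # first slot set: 32 GB DDR4 DIMMs -> 192 GB
nvdimm_tb_now = CHANNELS * 0.5     # second slot set: 512 GB NVDIMMs -> 3 TB (guess)
nvdimm_tb_xpoint = CHANNELS * 1.0  # 1 TB 3D XPoint DIMMs -> 6 TB (guess)

print(dram_gb, "GB DRAM,", nvdimm_tb_now, "TB NVDIMM now,",
      nvdimm_tb_xpoint, "TB with 3D XPoint")
```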
 