Skylake Xeon Platforms Spotted, Purley Makes A Quiet Splash At Computex

Status
Not open for further replies.

gangrel

Distinguished
Jun 4, 2012
553
0
19,060
Can't be just for memory; not THAT many. 64, maybe 128, OK, but this is a crazy number of pins. It's 500 more pins than you'd need to do some weird Socket 2011 + Socket 1151 configuration.

I wonder if the pins are mostly there to support interconnections between sections of the FPGA (for parallel processing support, for example) and/or with the rest of the processor.
 

therealduckofdeath

Honorable
May 10, 2012
783
0
11,160
It's got a quad-channel memory controller (rumour has it that future processors might have 6 memory channels; that's 384 pins just for the data), plus a few more for address and other selector bits. Add a 40-lane PCIe interface. That requires a lot of pins. The motherboard interconnects wouldn't plug straight into the processor.
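A rough back-of-the-envelope sketch (in Python) of that pin budget; the per-interface counts and the power/ground share are my assumptions, not official figures:

    # Rough pin-budget estimate for a socket like this one.
    # Per-interface counts below are assumptions, for illustration only.
    MEM_CHANNELS = 6                # rumoured channel count
    DATA_BITS_PER_CHANNEL = 64      # 72 with ECC; 64 matches the 384 figure above
    CMD_ADDR_PER_CHANNEL = 30       # address/command/control lines (assumed)
    PCIE_LANES = 40
    PINS_PER_PCIE_LANE = 4          # two differential pairs (TX+/- and RX+/-)

    memory_pins = MEM_CHANNELS * (DATA_BITS_PER_CHANNEL + CMD_ADDR_PER_CHANNEL)
    pcie_pins = PCIE_LANES * PINS_PER_PCIE_LANE
    signal_pins = memory_pins + pcie_pins

    # A big chunk of any socket's pins are power and ground, not signals.
    POWER_GROUND_FRACTION = 0.5     # assumed
    total_estimate = signal_pins / (1 - POWER_GROUND_FRACTION)

    print(memory_pins, pcie_pins, round(total_estimate))   # 564 160 1448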
 

bit_user

Polypheme
Ambassador
This is the future platform with 6-channel memory. Maybe the first board shown only supports 4, but the board in the second & third pics has 6 DIMMs on each side of both CPUs.

It's tempting to speculate about the black DIMM slots in the first pic - that they might be only for NVDIMMs, or something. In that case, I don't know why they'd be closest to the CPU, though. Maybe they just go in normal DIMM slots, and this is why Intel went to 6-channel.

Anyway, don't forget about OmniPath. Surely, they'll need some pins for that!
 

bit_user

Polypheme
Ambassador
The incredible size of the processors begs the TDP question
This is misformulated. Begging the question is when an argument assumes the very conclusion it's trying to establish. And even though a large socket suggests a high TDP, I wouldn't say it begs the question.
 

gangrel

Distinguished
Jun 4, 2012
553
0
19,060
Well, OK... but expanding from 256 to 384 pins for memory access only adds 128 pins, and that's not really related to the FPGA functionality. You'd do this any time you push to 6 memory channels. Going to 12 channels might blow things up to 768 pins for data; not sure how many for control. This socket has *1600* more pins. :)
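Putting rough numbers on that, assuming the commonly reported pin counts for the old and new sockets (2011 and 3647):

    # How much of the socket-pin growth can the memory data lines explain?
    # Socket pin counts below are the commonly reported figures (my assumption).
    DATA_BITS = 64                            # data width per DDR4 channel

    extra_data_pins = (6 - 4) * DATA_BITS     # 4 -> 6 channels: 128 pins
    extra_socket_pins = 3647 - 2011           # new socket vs. LGA 2011: 1636

    print(extra_data_pins, extra_socket_pins)
    # The rest of the gap has to come from command/address lines, extra
    # PCIe/UPI links, OmniPath, and (mostly) additional power/ground pins.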
 

bit_user

Polypheme
Ambassador
The real news is that Intel is embracing FPGAs to counter the death of Moore's law.
It might have more to do with customers like Google wanting to accelerate things like machine learning in hardware. Perhaps it's a hedge against the next-gen Xeon Phi (Knights Landing) failing to compete with GPUs. Or maybe OEMs just demanded it, as a way to further differentiate their product offerings.

I think it'd be interesting to know whether Xeon D basically freed them from having to worry about the single-CPU use case, so they could focus Purley on scalability.
 

bit_user

Polypheme
Ambassador
I'm puzzled by what pins people think would be added for the FPGA. Why wouldn't it just sit on PCIe or the internal bus connecting the cores?

And the slides we've seen indicate this is only 6-channel - not 12. There can be multiple DIMMs per channel, which is especially common in servers (which use registered memory, with its lower electrical load, specifically for this purpose).
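A quick sketch of why multiple DIMMs per channel matters for capacity; the DIMM size and DIMMs-per-channel below are example values, not platform specs:

    # Capacity per socket = channels * DIMMs-per-channel * DIMM size.
    channels = 6
    dimms_per_channel = 2        # registered DIMMs keep the electrical load low
    dimm_size_gb = 64            # example RDIMM capacity

    print(channels * dimms_per_channel * dimm_size_gb, "GB per socket")  # 768 GB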
 

Samer1970

Admirable
BANNED
meh ...

6-channel memory means 6 DIMMs per CPU minimum. More space instead of compact space!

USE HBM2, INTEL. You will not need the stupid 6 channels (HBM2 bandwidth is higher).

And you won't need HUGE motherboards anymore!

You could fit 2x XEONs on a tiny ITX mobo with one x16 slot included, with the 12/24 DIMM slots gone!!!

You could fit 4 XEONs on mATX/ATX!!!

You could fit 8 XEONs on EATX!!!

Just make each Xeon come with 32GB (minimum) of HBM2 RAM... and if you want more RAM, add another CPU!

Compactness should be the FUTURE.

AMD, if you are reading this, DO IT!!!

 

m3rc1l3ss

Distinguished
Aug 2, 2007
3
0
18,510
meh ...

6-channel memory means 6 DIMMs per CPU minimum. More space instead of compact space!

USE HBM2, INTEL. You will not need the stupid 6 channels (HBM2 bandwidth is higher).

And you won't need HUGE motherboards anymore!

You could fit 2x XEONs on a tiny ITX mobo with one x16 slot included, with the 12/24 DIMM slots gone!!!

You could fit 4 XEONs on mATX/ATX!!!

You could fit 8 XEONs on EATX!!!

Just make each Xeon come with 32GB (minimum) of HBM2 RAM... and if you want more RAM, add another CPU!

Compactness should be the FUTURE.

AMD, if you are reading this, DO IT!!!

This would compete against absolutely nothing. While 32GB of RAM per processor may have been fine 20 years ago, even cheap server boards today support at least 500GB per processor. In the enterprise, as RAM-based (in-memory) databases become more prevalent, demand for memory is only going to increase.
 
You might want to clarify the article title. We already have Skylake Xeons: the E3 v5 series. These are the E5 v5 parts, i.e. the Xeons that go on the successor to the X99 chipset.


Do you have any idea how expensive that would be, requiring the RAM on-package? Or the potential problems it creates, given how massive a single point of failure that is should either the CPU or the memory fail or be damaged? And let's not forget the new custom CPU coolers you'd need. Not to mention the customization problems you run into (I'll cover this more below).

And who apart from you is complaining about how ridiculously huge motherboards are right now? What sizable market segment thinks they're inordinately large?

Uh, point of order: what ITX board have you seen that has more than four DIMM slots, regardless of chipset? X99 is a quad-channel CPU/chipset, and even those ITX boards only have two slots. Only a handful of server-focused ITX boards have four slots. So no, you're not suddenly cutting even six slots for extra room. Also, changing to HBM doesn't shrink the overall package much, because even without the extra pins for the memory controller, you still need space for the memory stacks (lots of space if you're talking 32GB minimum). Getting two sockets on an ITX board is not going to happen when you need to make space allowances for CPU mounting and cooling.

Right, because no one wants room for extra card slots, M.2 drives, or other devices on the motherboard, right? And people that use this much processing power, typically enterprise solutions, usually build their servers in individual boxes? They don't use a completely different form factor on a rack?

Right, because it makes perfect sense to force a customer, when doing what used to be a simple and relatively inexpensive RAM upgrade, to spend extra money because they now have to buy a CPU as well. And no one could ever want or need a different configuration, like say more RAM, or non-ECC RAM, could they? Every variant of capacity, RAM type, and speed would mean Intel creating another SKU. That multiplies their SKU count combinatorially, which complicates their manufacturing and drives the price of these even higher.
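A toy illustration of that SKU blow-up; every count below is invented purely for the sake of the arithmetic:

    # Each on-package memory option multiplies the number of SKUs Intel
    # would have to build and stock. All counts here are made up.
    capacities = [32, 64, 128]            # GB of on-package RAM
    ram_types = ["ECC", "non-ECC"]
    memory_speeds = 3
    cpu_models = 20                       # core-count / frequency bins

    skus = len(capacities) * len(ram_types) * memory_speeds * cpu_models
    print(skus)   # 360 SKUs, versus 20 if memory stays on the motherboard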

These are Xeon E5s, used by fewer consumers than even the HEDT extreme CPUs. Those people are very particular about their setup, and forcing them into a one-size-fits-all will not fly. You know who does use them a lot? Businesses. Intel's lifeblood is the server market. You really think they're going to make some radical change that would force businesses to completely change over to a new form-factor, cooling solution, and RAM paradigm when they want to upgrade? Some people may be that short-sighted, but Intel is not.

It's only one aspect of the future, and not one that takes priority over everything else to everyone else.

 

Haravikk

Distinguished
Sep 14, 2013
317
0
18,790
Those sockets look the same size as a large smartphone!

I wonder why they're positioned centrally like that, though. Surely that means the heat from the front CPU is going to be blowing over the rear one. Wouldn't it make more sense to position both sockets at the rear of the machine with all of the RAM in front? RAM can get warm too, but it shouldn't be so hot that it impacts cooling of the CPUs much, and putting the CPUs at the back lets them exhaust as directly as possible.
 

bit_user

Polypheme
Ambassador
Like m3rc1l3ss said, that won't work because most systems in this class need more memory.

Intel has announced a 72-core CPU (code name: Knights Landing) that will feature 8-16 GB of HBM2/HMC memory, in addition to external memory. However, that CPU will probably be about the most expensive thing you can put in these sockets.

That said, I'm sure we'll see HBM2/HMC in desktop & mobile CPUs, as prices come down. It makes a lot of sense, especially for APUs.

As a side note, whenever you see something that you think makes sense that these companies aren't doing, there's usually a good reason for it. These companies have many smart engineers and managers, who follow trends in semiconductor research (as well as doing internal research), so I'm sure they know what's possible.

Sometimes companies make a bad strategic move (for example, ATI/AMD selling off their mobile GPU business), but when it comes to the details of their product portfolio, if there's a competitive advantage they can exploit, whether they do it is largely a matter of economics.
 

bit_user

Polypheme
Ambassador
RAM needs to be close to its respective CPU, for good performance. Putting it farther away might also increase the electrical load.

In general, server CPUs only need to be cooled enough to stay in spec. They're designed to run hot, so that data centers can reduce their air conditioning costs. Within a chassis, the airflow has to be streamlined, to keep the power spent on fans down.

Finally, you haven't seen the heatsinks. It's entirely possible that the front CPU's heatsink is shorter than the rear's.
 

bit_user

Polypheme
Ambassador
I remember Pentium Pros being big, but I didn't think they were that big. Makes me nostalgic...

According to what I can find, Purley will be either 76mm x 51mm or 76mm x 56mm (source: http://www.cpu-world.com/news_2015/2015061701_More_details_on_Intel_Purley_platform.html) and Pentium Pro was 63mm x 68mm (source: Upgrading and Repairing PCs by Scott Mueller).

Not far off.
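Comparing the quoted package footprints:

    # Package areas from the dimensions quoted above (mm x mm).
    purley_small = 76 * 51       # 3876 mm^2
    purley_large = 76 * 56       # 4256 mm^2
    pentium_pro = 63 * 68        # 4284 mm^2

    print(purley_small, purley_large, pentium_pro)
    # The larger Purley package and the Pentium Pro differ by less than 1%
    # in area, so "not far off" is about right.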
 

DavidC1

Distinguished
May 18, 2006
494
67
18,860
It's tempting to speculate about the black DIMM slots in the first pic - that they might be only for NVDIMMs, or something. In that case, I don't know why they'd be closest to the CPU, though. Maybe they just go in normal DIMM slots, and this is why Intel went to 6-channel.

Very interesting. They were talking about increasing total memory capacity by 4x.

Black DIMM: regular DIMM, 1x capacity
Blue DIMM: NVDIMM, 4x capacity

So an advertised 4x capacity increase, when the slots physically hold 5x, suggests the regular DIMMs are used as a cache of some sort, to mitigate the somewhat lower performance of Optane DIMMs. A 1:4 ratio is pretty large for a cache, so it would raise the minimum bound on performance.
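A sketch of that arithmetic with hypothetical DIMM sizes:

    # If the regular DIMMs act as a cache in front of the NVDIMMs, only the
    # NVDIMM capacity counts as usable memory. Sizes here are hypothetical.
    dram_dimm_gb = 32
    nvdimm_gb = 4 * dram_dimm_gb             # the "4x capacity" NVDIMM

    physical_gb = dram_dimm_gb + nvdimm_gb   # 5x worth of media in the slots
    usable_gb = nvdimm_gb                    # 4x advertised capacity
    cache_ratio = dram_dimm_gb / nvdimm_gb   # 1:4 DRAM cache to Optane

    print(physical_gb, usable_gb, cache_ratio)   # 160 128 0.25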
 

Samer1970

Admirable
BANNED


I was not talking to you, I was talking to Intel.

Your opinion does not matter much to me. :)

You are not the expert here. :)

I won't reply to each of your comments; it's a waste of time. Intel's experts will see the potential of my ideas.
 

Samer1970

Admirable
BANNED



Intel releases many CPUs for many sectors... the ones who need a crazy 1 terabyte of memory can keep their larger motherboards with DIMM slots.

The ones who need around 32-128GB or so can use the HBM2 CPU... (32GB per CPU).

It is time to eliminate the DIMM slot for most PCs, apart from those that need crazy amounts of memory, which can still exist.

The majority won't demand more memory. This "class" runs from desktops to workstations to servers with less than 128GB of RAM per mobo...

And it is not as expensive as it looks to "them", don't worry about it! They can lower the price.

I hope AMD and Intel are reading this.
 