Intel Announces Xeon W And Purley Workstation Processors

Status
Not open for further replies.

bit_user

Polypheme
Ambassador
Notably, Intel scrubbed the 16C/32T and 12C/24T Skylake-X equivalents from the lineup. We also notice adjusted base and Turbo Boost frequencies across the entire family, and the processors only support Turbo Boost (TB) 2.0. Intel's TB 3.0 does not make an appearance.
So... I guess this is their solution to the dismal thermal situation under that heat spreader. I'm in no hurry, so we'll just see if the next gen has solder.

All the processors come with a full complement of 48 PCIe 3.0 lanes and dual 512-bit FMA units for AVX-512.
nice!

 

bit_user

Polypheme
Ambassador
BTW, had to LOL at the "Expert Workstation" branding. As if to say "you're a n00b unless you spend at least $1.7k each, on a pair of 6-core CPUs".

But I am glad that Purley workstations will be a "thing", because it means we can buy used server CPUs and probably find a desktop board to put them in.
 

Ne0Wolf7

Reputable
bit_user, you already can put server processors in consumer boards. I have a Xeon E3-1231 v3 in my ASRock Z97 Pro4.
And I agree, the whole "expert" branding is funny.
 

bit_user

Polypheme
Ambassador

Yes, and I like that. Even E5-series Xeons, which are their mainstream server line.

But I was worried those days were at an end. I was referring specifically to their new LGA-3647 socket. I had feared they were going to make it a strictly server-only socket.
 

GR1M_ZA

Reputable
So still only 48 PCIe lanes for a $10K processor? Come on, Intel! Then AMD Threadripper is still a good buy at $1K for 64 PCIe lanes, if that is what you need. At a comparable core/thread count, the Intel offering is over $2K more expensive! DAMN...
 

michael.j.j.obrien

Prominent
Wow, $10K. I don't think I will. Even if I could afford it for a desktop, I don't think current desktop cooling technology could handle 205W and 20+ cores. I would imagine that all the Xeon cores are locked due to thermal management.

It puzzles me why overclockers ask if these are locked. Usually they are, because most are destined to live in rack-mounted server arrays, so overclocking individual processors is not a realistic option as yet.

The Skylake X processors and all the other unlocked processors are aimed at Desktop overclockers. I guess the grass always looks greener!

I get why using retired Xeon processors is a good way to go. Where do I get them from?
 

These articles get into building viable 32-thread and 40-thread systems. The 32-thread system seemed more likely to pan out well, IIRC: https://www.techspot.com/review/1155-affordable-dual-xeon-pc/

Here's the 40-thread build: https://www.techspot.com/review/1218-affordable-40-thread-xeon-monster-pc/

Still... with 32-thread Ryzen Threadrippers, the only reason to go with Xeons is if you need ECC memory along with the other precision features.

For jobs like animation, architecture, or structural engineering, you rarely need that level of precision, since either lives don't depend on it or you have conservative factors of safety.
 

GR1M_ZA

Reputable


All Ryzen Threadripper CPUs also support ECC memory.
 

michael.j.j.obrien

Prominent
I see Threadripper as a good choice for gamers; it has the grunt in the right places. But it can't live with the Intel Xeons, and it's not close to the X processors. Thermal issues are the limiting factor, and to some extent AMD shies away from open reliance on third-party cooling. This in turn leads them to produce very safe thermal solutions, mainly because they have experience and knowledge from GPU designs running up to 91C+.


I am currently writing device drivers and control software for a closed-loop AIO system. I have been writing drivers for over 30 years, and some of the chaos in this area of Windows is very worrying. Companies, some of them household names, have cut corners to get to market faster; the result is drivers that read the MSRs for temperature and CPU status running free, without software synchronization.

As an example, SIV64 uses the synchronization correctly but often gets hammered by drivers that don't. Hence the 0.0 readings you sometimes see. Add to this the fact that the temperature-reading support in Windows has never been completed. Not that it would be fast enough anyway!

Before Intel or AMD move towards much more reliance on third-party coolers, it might be an idea to formally define how it should be done. An interface to access the MSRs, plus a few I/O pins reserved for cooling, is now a must-have O/S resource. I am a little surprised it is not already there.
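For anyone curious what those temperature MSRs actually contain: here's a minimal sketch of the decoding step, assuming Intel's documented layout where IA32_THERM_STATUS (0x19C) bits 22:16 give the distance below TjMax and MSR_TEMPERATURE_TARGET (0x1A2) bits 23:16 give TjMax itself. The raw values are made-up examples, not read from hardware, and this shows only the bit arithmetic, not the synchronized kernel-mode read the post is arguing for.

```python
IA32_THERM_STATUS = 0x19C       # per-core thermal status MSR
MSR_TEMPERATURE_TARGET = 0x1A2  # holds TjMax for the package

def core_temp_celsius(therm_status_raw: int, temp_target_raw: int) -> int:
    """Core temperature = TjMax minus the digital readout (distance below TjMax)."""
    readout = (therm_status_raw >> 16) & 0x7F  # digital readout, bits 22:16
    tjmax = (temp_target_raw >> 16) & 0xFF     # TjMax, bits 23:16
    return tjmax - readout

# Illustrative raw values: TjMax = 100 C, readout = 38 -> core is at 62 C.
sample_status = 38 << 16
sample_target = 100 << 16
print(core_temp_celsius(sample_status, sample_target))  # 62
```

In a real driver, both reads would have to go through a single serialized access path; two drivers issuing RDMSR unsynchronized is exactly the free-running chaos described above.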

 

bit_user

Polypheme
Ambassador

Don't confuse ECC with "precision". In fact, I think Xeons don't have any features which affect precision - only stability, manageability, and perhaps security.

ECC is worthwhile whenever memory errors could be costly, either due to downtime or because you're modifying some high-value data. IMO, ECC is mandatory for most servers and worthwhile for most business machines. The only times I wouldn't bother with it are in gaming machines, most consumer electronics, and low-cost PCs used mostly for web apps.
 
I thought there was some difference with quadruple-precision with Xeons, but from what I could find on Google just now, that doesn't seem to be the case.

My last engineering job ran custom-built overclocked cheapo Core 2 Quads with standard DDR2 (admittedly, that was 5 years ago). Stability was a problem. I wanted to upgrade to proper workstations, but nobody else seemed to care much about the quality of the computers. To that end, I built a Sandy Bridge i7-2600K system for work that I still use as my main rig at home today. But I guess in conventional structural engineering, errors aren't very dangerous due to design reviews and factors of safety. If you're designing something like the Burj Khalifa, I'd want ECC.
 

bit_user

Polypheme
Ambassador

Now you're scaring me.

A single-bit error could occur at any point in the process - even post-review, before the final data is sent out for construction.

Here's one fairly recent survey:

http://arch.cs.utah.edu/arch-rd-club/dram-errors.pdf

Although the actual probability of an error occurring in the working copy of your data is small, multiply it by the amount of time it's being edited by all the different people working on it, and it starts to become significant.
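To put that multiplication in concrete terms, here's a back-of-envelope sketch. If each machine-hour has some small probability p of a bit flip landing in your working data, the chance of at least one flip over many machine-hours is 1 - (1 - p)^hours. The p value below is purely illustrative, not a measured DRAM error rate from the survey.

```python
def p_at_least_one_error(p_per_hour: float, machine_hours: float) -> float:
    """Probability of at least one error, assuming independent per-hour trials."""
    return 1.0 - (1.0 - p_per_hour) ** machine_hours

p = 1e-5  # assumed per-machine-hour error probability (illustrative only)

# One person, one afternoon of editing: still tiny.
print(p_at_least_one_error(p, 4))

# Five people editing the same model over a working year (5 * 2000 hours):
# now the cumulative risk is on the order of ten percent.
print(p_at_least_one_error(p, 5 * 2000))
```

The exact numbers depend entirely on the assumed rate, but the shape of the argument holds: per-hour risk is negligible, project-lifetime risk is not.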

IMO, it's just irresponsible and inexcusable not to use ECC for this, particularly when you consider that the Xeon versions of most CPUs have a roughly comparable cost. Back in the Core 2 days, I think you didn't even need a Xeon - just a motherboard that would support it. Sure, ECC RAM isn't available at the highest clock speeds and might have a couple more cycles of latency, but the real-world performance impact is negligible and the price difference is small (assuming we're talking about unbuffered DIMMs).
 

bit_user

Polypheme
Ambassador

Did you see that the Xeon W-2195 has 18 cores in a nominal TDP of 140 W? The price is listed as TBD, but it'll surely be much less than $10K. I don't know if you saw it, but the price list for the W processors is in the second slide of the first set.

It'd be crazy to see a workstation with dual 28-core 8180s. I have difficulty imagining a compelling workload for it, but I guess maybe some kind of CPU-only renderer. Or perhaps if you want to set a world record in Linux kernel compile times...

http://phoronix.com/scan.php?page=news_item&px=AMD-TR-1950X-Compile-Times
 