Why Intel Created The C232 And C236 Workstation Chipsets


bit_user

Polypheme
Ambassador
The C232's main feature isn't listed in the table: support for ECC memory. I think that's the key selling point of the C232 - for small NUC-style workstations and stripped-down servers that require ECC.

Since Nehalem, you've had to pair a Xeon (or Core i3) CPU with a server/workstation chipset to use ECC. The only exceptions are a handful of special SKUs oriented towards embedded applications, and even those must still be paired with an ECC-supporting chipset.

It strikes me as truly bizarre that neither supports HD Graphics, though. The lack of HD Audio is also weird. That suggests the C232 is intended for small servers, not NUC-style workstations. The C236 might be aimed at low/mid-range workstations, since many workstations contain a separate graphics card.
 

Quixit

Reputable
bit_user said:
The C232's main feature isn't listed in the table: support for ECC memory. I think that's the key selling point of the C232 - for small NUC-style workstations and stripped-down servers that require ECC.

Honestly, almost nothing really needs ECC anymore. Most people using ECC RAM are doing it to hedge their bets. When a system reboot might cost you $10,000, it's worth it. In the sort of systems these are going into, it doesn't really matter.
 

iamacow

Admirable
So basically they made a "locked" chipset to stop people from overclocking the Xeons. Smart move, Intel, because otherwise everyone would buy the cheaper chip and overclock it. Heck, I picked up an E5-2670 for $100 the other day, and it slightly outperforms the 4960X when overclocked to 3GHz.
 


1. ECC memory is not supported on every C236 and C232 board; it is an optional feature that OEMs can choose to implement.
2. Xeon and Core i3 have nothing to do with each other whatsoever.
3. There have been numerous Xeon CPUs released on several sockets that work with consumer chipsets; it is not limited to workstation/server chipsets and embedded systems.
4. Neither supports Intel HD Graphics because Intel doesn't use Intel HD Graphics in its Skylake LGA 1151 Xeon CPUs.
5. The HD Audio technology is a subsystem of the HD video technology, and so it is also not enabled.
 

tleavit

Distinguished
It may also have something to do with the fact that enterprise Xeon chips have so far outpaced home (or workstation) use that it's ridiculous to buy them. Besides, they are packed full of all kinds of instructions for virtualization that 99.99% of people out there don't use. I can run six Win 7 machines on my 4.0GHz Skylake now without the CPU even blinking. We run 100 virtual machines on our Cisco UCS blades with 2 x 8-core Xeon procs, and the CPU rarely breaks 15%. People today have absolutely no concept of the power these CPUs have. It's like reading people here bragging about having 32 gigs of RAM to play games that only use 4 gigs. There's a reason I can take a 10-year-old Core Duo Dell laptop (D610/D810), put Windows 10 on it, and have it be perfectly fast for normal work.
 

iamacow

Admirable


I've got 64GB on my X79. Planning on going to 128GB or 256GB on an X99 if I can :)
 

bit_user

Polypheme
Ambassador
Really? What changed, and when?


No, when a reboot costs that much, you use a server with the full RAS features that are only offered with Intel's E7 CPUs and chipsets. There are plenty of cases where errors are more expensive than the small price difference of ECC RAM: basically anything involving finance, a whole host of embedded applications, and any kind of file server or database server, where a bad bit could cause corruption in potentially valuable data. These often don't need to be big, expensive systems (and frequently can't be). So, the added measure of protection from ECC is usually easily worth a few extra dollars.

And ECC has little to do with reboots. A bad bit might cause a BSoD, but it often doesn't. And if I see ECC errors in my logs, I'm going to take the machine down and replace the RAM, whereas it might have stayed up longer if the errors had gone unnoticed.
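To illustrate what ECC actually does with a bad bit - a toy sketch, not how a real memory controller is implemented - here's a Hamming(7,4) code in Python. Actual ECC DIMMs use a wider SECDED code (72 bits protecting 64 bits of data), but the principle is the same: recompute parity, locate the flipped bit, and quietly flip it back.

def encode(d):  # d: 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # codeword positions 1..7

def correct(c):  # c: 7-bit codeword, possibly with one flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean; otherwise the 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return c, syndrome

word = encode([1, 0, 1, 1])
word[4] ^= 1                         # simulate a stray bit flip at position 5
fixed, pos = correct(word)
print(pos, fixed)                    # position 5 is located; data comes back intact

The point being: a single flipped bit gets corrected on the fly and merely logged. It doesn't have to crash anything, which is why ECC and reboots are mostly unrelated.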
 

bit_user

Polypheme
Ambassador
Go check the specs. You'll find nearly all i3s support ECC memory when used on a chipset that also supports ECC. The same is not true of i5s and i7s. That's the only thing I said they had in common.

I never said otherwise. What I said was that some embedded SKUs of non-Xeon, non-i3 CPUs also supported ECC (when paired with the proper chipset). I never said anything about compatibility between Xeons and consumer-oriented chipsets, except that you can't use ECC on such setups.

This was not at all clear to me from the article. I thought "HD Graphics" was being used to refer to any processor graphics. According to http://ark.intel.com/products/family/88210/Intel-Xeon-Processor-E3-v5-Family#@Server, some include "Intel® HD Graphics P530".


Honestly, it was a pretty good article, but I expect mods & authors to read comments more carefully before replying. "With great power..."
 

bit_user

Polypheme
Ambassador
???

Greed is at the very core of what moves all for-profit companies. That goes without saying. The question the article is trying to answer is why Intel thinks this will make them more money.
 
My bad, I misread a few of your sentences. I thought you were saying that Xeon and i3 were the same, and that a Xeon could only work with a workstation/server chipset and required ECC memory. I read over it too quickly; sorry about that.

As for the Intel HD Graphics, a select few of them do have it, but I believe those models are used on integrated boards only. To be fair, I guess it shouldn't be in the chart, since the chipsets don't explicitly remove Intel HD Graphics; it's just that the bulk of the Xeons don't have it, unlike the Skylake Core processors.
 

bit_user

Polypheme
Ambassador


No prob. Again, congrats on the articles. All solid, that I've seen. Keep 'em coming!
 

kenjitamura

Distinguished
bit_user said:
Greed is at the very core of what moves all for-profit companies. That goes without saying. The question the article is trying to answer is why Intel thinks this will make them more money.
People should have been able to figure out for themselves that Intel was just trying to squeeze more cash out of its customers, so writing a lengthy article to explain something that could be summarized as "Intel wanted more money" makes little sense.

It's safe to assume at least some people went into this article expecting an explanation involving a manufacturing incompatibility between Xeons and consumer chipsets, which would have been a reasonable motive for a change in policy that completely reverses the last two processor generations.

Instead they got a wall of text that can be summarized as: "Intel thought people were getting too good a deal with Xeons, so it implemented a barrier to keep them out of inexpensive performance builds."

This article is nothing but clickbait and imparts no insight or knowledge that would make it worth a read.

Hence my TL;DR is really all anyone needs to read to know the entirety of the information contained within this article, without feeling like they wasted several minutes of their life.
 
Quick question: what is the difference between the CPU's PCIe lanes and the motherboard's? Also, the i7-6500K mentioned in the article does not exist. I think you meant the i5-6600K, perhaps?

Anyway, I still think these are worth it for gamers, video editors, etc. Realistically, since i7-6700Ks sell for $420, it's worth it to grab a C232 motherboard and a Xeon. Intel doesn't market the Xeons to gamers, but smart gamers (especially those who stream) snatch up Xeons.
 


Yes, that was a mistake; it should have been the i5-6600K.

As for the difference in the PCI-E lanes, the lanes themselves are exactly the same; what matters is the controller behind them. The CPU's PCI-E controller can divide its 16 lanes up as far as x8/x4/x4 in order to support three GPUs, but it cannot split them any further. The motherboard's (chipset's) PCI-E lanes can be divided into groups of x1, x2, or x4, but cannot be combined into anything wider, so they cannot support devices like GPUs that require greater bandwidth.
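If it helps, here's that rule expressed as a rough Python sketch. The names and structure are mine, purely illustrative (there's no real API like this), and the 20-lane chipset total is the figure discussed in the next paragraph:

CPU_SPLITS = {(16,), (8, 8), (8, 4, 4)}   # how the CPU's 16 lanes may bifurcate
PCH_WIDTHS = {1, 2, 4}                    # allowed widths for chipset-fed ports

def valid_cpu_split(widths):
    return tuple(sorted(widths, reverse=True)) in CPU_SPLITS

def valid_pch_ports(widths, total=20):
    return all(w in PCH_WIDTHS for w in widths) and sum(widths) <= total

print(valid_cpu_split([8, 4, 4]))     # True:  three GPUs hanging off the CPU
print(valid_cpu_split([4, 4, 4, 4]))  # False: the CPU can't split below x8/x4/x4
print(valid_pch_ports([4, 4, 1, 1]))  # True:  e.g. two x4 M.2 slots plus x1 devices
print(valid_pch_ports([8, 4]))        # False: the chipset can't offer an x8 port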

There is also a bandwidth difference, of sorts. The CPU's PCI-E lanes are fed directly from the CPU, which gives your GPU as much bandwidth, with as little latency, as possible. The chipset connects to the CPU over a DMI 3.0 interface, which has bandwidth roughly comparable to a PCI-E 3.0 x4 connection. Although it is probably negligible, the PCI-E lanes coming from the chipset will have greater latency, because traffic has to pass through the chipset first. There can also be a bottleneck at times with the chipset's PCI-E lanes, because four lanes' worth of bandwidth can't feed 20 lanes at the same time. Intel said that it would be hard to saturate the DMI 3.0 interface, but we haven't been able to test that yet to know for sure.
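For rough numbers (back-of-the-envelope, assuming DMI 3.0 is electrically four PCI-E 3.0 lanes at 8 GT/s each with 128b/130b encoding):

lanes, gt_per_lane = 4, 8e9   # DMI 3.0: four lanes at 8 GT/s each
efficiency = 128 / 130        # 128b/130b line-code overhead
gbytes_per_s = lanes * gt_per_lane * efficiency / 8 / 1e9
print(f"{gbytes_per_s:.2f} GB/s")  # ~3.94 GB/s, i.e. ~31.5 Gbit/s, each direction

So "roughly a PCI-E 3.0 x4 connection" works out to just under 4GB/s in each direction.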
 

Rookie_MIB

Distinguished


Ah - so there are in effect two chips which have PCIe lanes: the CPU (which has 16 dedicated lanes to the PCIe slots) and the chipset (whose lanes are referenced more or less as the HSIO lanes), with the CPU reaching the chipset's HSIO lanes through the DMI interface. I was wondering about that myself - how they would connect the SATA ports, the USB ports, the other PCIe x1 ports, etc. - so this makes sense. IIRC the DMI interface has about 32Gb/sec of bandwidth.

USB 3.0 - 5Gbit, 2-3 SATA drives (HDDs) at about 800Mbit each, Gigabit Ethernet - 1Gbit, some USB 2.0 - 480Mbit each. Truthfully, it would be pretty difficult to fully saturate the DMI interface UNLESS you allocated some of the HSIO lanes to a PCIe x4 interface with an NVMe SSD. Even then it's not guaranteed, as the read speeds for most of those drives are around 2500MB/sec (= 20Gbit), leaving about 12Gbit still on the table.
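Adding it up (using my rough per-device figures above; Python just for the arithmetic):

loads_gbit = {
    "USB 3.0 port":        5.0,
    "3x SATA HDDs":        3 * 0.8,
    "Gigabit Ethernet":    1.0,
    "2x USB 2.0":          2 * 0.48,
    "NVMe SSD on x4 HSIO": 20.0,   # ~2500MB/sec sequential read
}
total = sum(loads_gbit.values())
print(f"{total:.1f} of ~31.5 Gbit/s")  # ~29.4 Gbit/s with everything flat out

Even with every one of those devices running flat out simultaneously - which basically never happens - you're still just under the DMI budget.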

So - they're probably right. An automated torture test could probably accomplish saturation, but your average user working on their computer would have a tough time of it.
 
Yes, that's true; it would be difficult for most users to saturate it. Realistically it shouldn't be an issue unless someone is using M.2 SSDs and has something like an extra GPU installed in a PCI-E slot that connects to the chipset rather than directly to the CPU. Even then, they would probably need to be performing compute-intensive tasks.
 
"Why Intel Created The C232 And C236 Workstation Chipsets"
TL;DR "greed".

lol, more like common sense, considering Xeon is supposed to be for business workstations and servers, not for the noob gamers out there who think a Xeon CPU will be better than a Core i7 for playing Call of Duty something-or-other.

 
Gentlemen,

An informative and useful description.

Of course, the features and design are interesting, but performance in real-world systems is key. I've been watching results for the Skylake Xeon E3 v5's on Passmark, where there are now 41 systems tested.

This is a still-small sample, but the results are interesting. The use seems heavily workstation-oriented, as 27 of the 41 systems are using Quadros, and 24 of those are Quadro M's, meaning the E3 v5 is popular in laptops - the Dell Precision 7710 and 7510. Thanks to M.2, the highest-rated E3 v5 is a laptop (Precision 7710 / E3-1535M v5 / Quadro 5000M / Samsung SM951 NVMe). The disk score of 13622 is the highest I've ever seen in a laptop. Also, I don't think I've ever seen a laptop as the highest-rated system (5516) in a CPU search. For comparison, my main system is the highest-rated HP z420, at 5046, with an E5-1660 v2 (6-core @ 3.7/4.0GHz, CPU = 13989) / Quadro K4200 / Intel 730 480GB (disk = 4555). Skylake Xeon does seem to represent a leap ahead for laptops, with M.2 at least.

The CPU scores are interesting. The top Passmark CPU score is 10652, from an E3-1275 v5 on a Supermicro X11SSZ0F, and that score was achieved using the integrated HD P530, which scored 2D = 629 and 3D = 1090. The same system was tested with a GTX 970, and the CPU score dropped a bit, to 10517 (2D = 802 and 3D = 9217). The memory score was identical at 2690, suggesting that using system RAM for video upsets neither the processing nor the RAM effectiveness.

However, comparing results of the E3-1275 v5 (3.6/4.0GHz) to the Haswell v3 (3.5/3.9GHz), the top v3 CPU score of 11293 is also using Intel integrated graphics (the P4700), on an ASUS Z87-WS motherboard with a Plextor PX256M3: 2D = 1066 (excellent) and 3D = 791 (about a Quadro K600). ASUS Z87 boards hold the top 7 spots for CPU performance, but the CPU scores for the slightly lower-clocked v3 seem, at a glance, stronger than for the v5. It may be that ASUS WS motherboards (and some Supermicro) extract more from Xeon CPUs than Dell and HP. Anecdotally, this is another suggestion that Intel integrated graphics is very efficient and continues to be better than you'd think (and that M.2 is a winner), but not a clear indication that Skylake is walking away from Haswell noticeably - so far. Early days.

I'm looking forward to the new Broadwell Xeon E5 v4's. There have been a few engineering samples creeping about already. How about this: the E5-2602 v4 is a 4-core @ 5.1GHz, and the E5-2699 v4 is a 22-core / 44-thread part at 2.2/3.6GHz? I should dearly love to have a 44-core / 88-thread, 1TB DDR4-2400 workstation to watch cat videos on YouTube - and then do my own oceanic and atmospheric models.

Cheers,

BambiBoom
 