- VROC allowed me (and still allows me) to use 4 Intel Optane drives in RAID 0, each saturating its PCIe 3.0 x4 link, with 10 DWPD endurance per drive. Outstanding in 2019. Of course you can do better today, but 10 DWPD endurance is pretty secure... Even after 4 years of work, I still have 100% life left in them today (per the SMART readout).
Yeah, endurance of Optane is pretty nuts. IIRC, the last gen DC drives had endurance of like 100 DWPD? They're really aimed at applications like a caching tier or journalling for a much larger flash tier.
It's hard for an end user to do anything with them which truly exploits their performance characteristics. For normal desktop usage, the vast majority of reads are serviced by the page cache or kernel read-ahead. Writes are again buffered by the kernel, unless an app goes out of its way to do an fsync(). So, other than cold boot times, it's hard to really "see" the benefits of Optane vs. a fast NAND-based NVMe drive.
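Just to illustrate what "going out of its way" means, here's a minimal C sketch of writes that actually hit the drive instead of sitting in the page cache (the file path is just a placeholder):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* O_DSYNC makes each write() wait until the data reaches stable
     * storage, instead of just landing in the kernel's page cache.
     * The path below is only a placeholder. */
    int fd = open("/mnt/scratch/testfile", O_WRONLY | O_CREAT | O_DSYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char buf[] = "hello";
    if (write(fd, buf, sizeof buf) < 0) perror("write");

    /* The other option: write normally, then explicitly flush. */
    if (fsync(fd) < 0) perror("fsync");

    close(fd);
    return 0;
}
```

Databases and journaling workloads do this constantly, which is why those are the places Optane's low write latency actually shows up.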
Funny thing is that I had always aspired to own an Optane drive. When Intel announced it was being discontinued, I panicked and bought a 400 GB P5800X drive. I still haven't put it into service, however. I wonder if it might appreciate in price, once the supply finally dries up. If not, I'll probably go ahead and use it as a boot drive.
In the meantime, I bought an Intel/Solidigm P5520 to use for that, which I expect will be more than enough performance for me. It cost less than 1/3 the price and is nearly 10x the capacity. Both drives are PCIe 4.0 and the P5520's 1 DWPD endurance is more than enough for my use cases. Heh, when you do the math, 1 DWPD @ 10x the capacity is equivalent to doing 10 DWPD on the Optane drive!
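Spelling out that math (the ~3.84 TB P5520 capacity is my assumption, based on it being nearly 10x the 400 GB P5800X):

```c
#include <stdio.h>

int main(void) {
    /* Endurance ratings from above; the 3.84 TB P5520 capacity is
     * assumed from "nearly 10x" the 400 GB P5800X. */
    double optane_gb = 400.0,  optane_dwpd = 10.0;
    double p5520_gb  = 3840.0, p5520_dwpd  = 1.0;

    printf("P5800X daily write budget: %.0f GB/day\n", optane_gb * optane_dwpd);
    printf("P5520 daily write budget:  %.0f GB/day\n", p5520_gb * p5520_dwpd);
    return 0;
}
```

Both come out to roughly 4 TB of writes per day, every day, for the life of the drive.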
A good place I found to buy these drives is Provantage (both Optane and Solidigm):
www.provantage.com
They don't currently stock all models of Solidigm drives, however. There are higher-endurance models I don't see on there, in case you ever need that. The one I bought is for mixed read/write workloads - they have a lower tier that's QLC-based, for read-oriented usage.
Of course I was talking about AVX-512 units (FMA), that is what I wrote, and there are 2 AVX-512 units per core, so 36 AVX-512 units in total for a 10980XE.
See, it's the talk of "units" which threw me off. AVX-512 is more than just FMAs. What's either single or double is the number of FMA ports, depending on the Skylake/Cascade Lake model. In Ice Lake SP and beyond, Intel always enables both FMAs per core.
The Sunny Cove and Willow Cove cores found in the laptop Ice Lake and Tiger Lake SoCs physically have only one FMA per core. You can find die shot comparisons of the laptop & server Sunny Coves which show where the second FMA was bolted on. In contrast, the client Skylake cores physically lack any AVX-512 - again, confirmed by die-shot comparisons. Finally, the difference between client & server cores was carried through even to Golden Cove, where the client physically has a single AVX-512 FMA port, even though AVX-512 is disabled on them.
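For anyone who hasn't played with it, each AVX-512 FMA instruction does 16 single-precision multiply-adds at once; a minimal sketch with GCC intrinsics (compile with -mavx512f):

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* Runtime check, since client Skylake parts lack AVX-512 entirely. */
    if (!__builtin_cpu_supports("avx512f")) {
        puts("AVX-512F not available on this CPU");
        return 0;
    }
    __m512 a = _mm512_set1_ps(2.0f);
    __m512 b = _mm512_set1_ps(3.0f);
    __m512 c = _mm512_set1_ps(1.0f);
    __m512 r = _mm512_fmadd_ps(a, b, c);  /* r[i] = a[i]*b[i] + c[i], 16 lanes */

    float out[16];
    _mm512_storeu_ps(out, r);
    printf("lane 0: %g\n", out[0]);       /* 2*3 + 1 = 7 */
    return 0;
}
```

With both FMA ports enabled, a core can issue two of those per cycle, which is where the 2-units-per-core count comes from.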
Yeah, what can I say. There are a lot of bad/unqualified programmers who do not have the necessary theoretical background or the right incentive to design and write good systems. I recommend they (not you) go back to school and take some good classes to further their knowledge.
I think people tend to be rather self-selecting. Either you're interested in milking the performance out of your hardware, and willing to do what it takes to accomplish that, or you're not. Ignorance is part of it, but these days the information is so readily available that ignorance primarily comes from lack of interest/curiosity/drive/etc.
On the bright side, we can think of it as job security. We're probably not far off from the age of a billion people on the planet with some idea about how to write code. Then, there's AI-driven code generation, as well. So, having detailed hardware knowledge and a good track record will be real selling-points, in the job market. I hope.
My previous system before the 10980XE was actually an Intel Xeon W3690 / X58 based system with 6 cores, and I still use that machine for common tasks (not dev). It has been running stable now for more than 12 years; I just upgrade the GPU every now and then. In those days you could put a Xeon W3690 into an X58 consumer board - amazing platform! OC stability has been like 100% for 12 years now. Honestly, you do not really need more than that for common everyday tasks if you have a good GPU.
Yeah, I had one of those at work, for a long time. Just this year, the PSU probably blew a capacitor, because I smelled some smoke and noticed the machine was off and would no longer power on. I hadn't really been using it since the pandemic, but it made a decent Linux desktop. Years ago, I'd upgraded the graphics card to a GTX 1050 Ti, which drove my 4k monitor perfectly (for my needs, at least).
I agree the W790 platform is an insult to enthusiast HEDT users like me and, I think, you. W2400 and W3400 series alike, they are not priced for the enthusiast market, only for the enterprise market. And even then, putting on my enterprise hat, I would not be able to justify the performance vs. price per core ratio.
They are hoping you need AVX-512 and/or AMX. That's their main value-add for the lower end W2400 models. Maybe also higher memory capacities, since it supports RDIMMs. And I guess PCIe lanes, for people running multi-GPU setups.
So I am hoping that Granite Rapids will give Intel an opportunity to produce an enthusiast version. Sapphire and Emerald are out of reach. Holding out 1 more year to see what comes...
In the next generation, Intel is going back to having 3 different sockets. However, even the smaller socket is going to be a similar size to their current one. So, I'm not really hopeful that they will ever go back to having a HEDT platform within my price range. I've made my peace with that, given how far the client platform has come.
I just wish the DMI connection had been upgraded to PCIe 5.0, because that would offer a lot more expandability. In contrast, I don't care at all that the x16 slot is PCIe 5.0. There are no PCIe 5.0 graphics cards, I don't need one, and I don't need a PCIe 5.0 SSD. The only place PCIe 5.0 would've been useful to me is the one place they didn't put it - the DMI connection!
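Back-of-envelope numbers on what that would have meant, assuming the usual x8 DMI link and standard per-lane PCIe rates:

```c
#include <stdio.h>

int main(void) {
    /* Per-lane throughput after 128b/130b encoding overhead. */
    const double gen4_gbs_per_lane = 1.97;  /* PCIe 4.0: 16 GT/s */
    const double gen5_gbs_per_lane = 3.94;  /* PCIe 5.0: 32 GT/s */
    const int lanes = 8;                    /* assuming a x8 DMI link */

    printf("DMI 4.0 x8:            ~%.1f GB/s\n", gen4_gbs_per_lane * lanes);
    printf("PCIe 5.0-speed DMI x8: ~%.1f GB/s\n", gen5_gbs_per_lane * lanes);
    return 0;
}
```

Doubling the chipset uplink to ~31 GB/s would comfortably cover a couple of fast NVMe drives plus everything else hanging off the chipset.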