Intel To Infuse Second-Gen Xeon Purley Platform With 3D XPoint, Tightens Data Center Stranglehold

Intel is doomed. Over the next 20 years, you will see that company fall apart because they try to force proprietary interfaces when the whole market wants open ones. AMD will start shipping 8-core consumer chips while Intel keeps selling dual-core Skylake/Kaby Lake for over $300 in mobile. Hopefully Zen can at least match Broadwell performance; all these companies want is something semi-competitive, and they know they will win on price.
 
AMD's (XPoint-like) idea is better.
It works with current architecture, so I don't have to throw away tons of hardware to get what Intel is doing.

Looks like Intel didn't learn anything from the RDRAM fiasco...
 
"Looks like intel didn't learn anything from the RDRAM fiasco..."

Back then, they didn't have the market pretty much cornered. They probably think that, with their huge market share, they can force it on people now.
 
I'm still waiting for 1st gen Purley, dang it!

Actually, I'm planning on replacing my Sandy Bridge-E workstation with Kaby Lake-X. So, even after it arrives, perhaps I'll still have a bit of a wait.

Just bought a DC P3500 PCIe SSD to take advantage of my abundant PCIe 3.0 lanes. That should hold me over a bit longer.
: )

In either case, we weathered a barrage of takedown requests associated with our article, and more specifically, the pictures.
ThankYouThankYouThankYou. Many internets to you, good sirs!
 
I disagree with the notion that Intel is doomed. The proprietary-interface push is going to give them a short-term advantage that will continue to lock in market share. Even without significant processor performance gains, the overall marriage of these technologies and the reduction of bottlenecks will deliver a significant performance gain, giving them dominance without relying on die-shrink cycles that can't be maintained. Gen-Z may eventually catch up, but all Intel would have to do at that point is tack the interface onto the side of their hardware (even though I don't think they will).

Rambus didn't work out, but the primary reason was that DDR (and its evolving successors) offered on-par performance at commodity pricing, and both were on the market at the same time. Gen-Z is years off from real-world application, whereas the Intel interconnects are due out in the next calendar year. On top of that, Gen-Z hardware will be a data center oddity to begin with, while Intel's interconnects will have already trickled down into the PC ecosystem long before. With the speed of technology development, first to market always has an advantage, and product life cycles are measured in months. Gen-Z risks being obsolete before it even sees the light of day.

In the places where Gen-Z would be viable, Intel can only lose, because they already hold so much of the market share. Gen-Z represents a significant threat to Intel because it would homogenize the data center backbone in such a way that competing CPU architectures could coexist on a shared hardware fabric, with no cost penalty to change CPU architecture other than software. As Moore's Law is reduced to a failed hypothesis (due to the size of the atom), manufacturing process is bound to no longer define performance. Highly specialized core structures and parallelism will be the only direction to go, and I highly doubt that Intel's dominance would remain (design refinement seems to be doing pretty well in the fabless world) without proprietary isolation of its architecture. Intel isn't stupid and will protect itself. As said before, I wouldn't hold my breath for Intel to jump on the Gen-Z bandwagon anytime soon.
 
I agree with your wall of text.

Just because their competitors are organizing doesn't mean they're automatically doomed. It does mean they'll face pricing pressure and will possibly be forced to add standard interfaces to certain SKUs. But if they can penetrate new markets or grow their share of existing ones faster than their margins decrease in their existing cash cows, they can maintain and even grow.

BTW, I'd like to see Intel be half as enthusiastic about PCIe 4.0 as they are about OmniPath and the rest of these proprietary interfaces.
 


I agree with most of this as well. The strategy is smart (actually, brilliant), well planned, and well timed, which are the kinds of characteristics that propelled Intel into its dominant position in the first place. Intel executes so well that they are certainly going to make a lot of headway before the competition even gears up. In fact, much of the foundation for all of this is already there; they began Omni-Path, for instance, several years ago. I do hope, in the interest of fair play and pricing, that at least one of the three new consortiums gains traction.
 


The DC P3500 is a really, really solid SSD. Good choice; if I were to buy one right now for personal use, it would definitely be the P series, and the 3500 is hard to beat for value. Just be sure to keep some gentle air on it :) If you dive into the command-line isdct management utility, there is some interesting information and fun stuff to play with. For the sake of science, of course.
😉
 


This is not about consumers.
 
Your review of its successor helped convince me. For some reason, the price on the 400 GB model momentarily dipped to $225, which is a good bit cheaper than the consumer version (750 series). That's about $1 per TBW, BTW. It's now closer to $300, last I checked. I feel it's still a pretty good deal, given how well it holds up against Samsung's latest.

Anyway, one thing I noticed is that it ships with the sector size set to 512 bytes. I already had to install their isdct package to upgrade its firmware (can't use their SSD Toolbox with this drive, for some reason). I'm curious what the performance difference between 512-byte and 4K sectors will be.

http://www.intel.com/content/www/us/en/support/solid-state-drives/data-center-ssds/000016238.html
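
If you do reformat it to 4K, one rough way to sanity-check the before/after numbers is to time O_DIRECT random reads so the page cache doesn't mask the result. This is just a quick sketch, not a proper benchmark (fio does this much better); the device path, read count, and 4 GiB span are placeholders, it assumes Linux, Python 3.7+, and root, and the block size has to be a multiple of whatever logical sector size the drive is formatted to:

import mmap
import os
import random
import time

DEV = "/dev/nvme0n1"        # placeholder device node; change to match your drive
BLOCK_SIZE = 4096           # compare 512 vs 4096; must be a multiple of the logical sector size
NUM_READS = 20000
SPAN = 4 * 1024**3          # confine reads to the first 4 GiB of the device

def timed_random_reads(block_size):
    """Return the seconds taken for NUM_READS direct-I/O random reads of block_size bytes."""
    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, block_size)   # anonymous mmap is page-aligned, which O_DIRECT requires
    try:
        start = time.perf_counter()
        for _ in range(NUM_READS):
            offset = random.randrange(SPAN // block_size) * block_size
            os.preadv(fd, [buf], offset)
        return time.perf_counter() - start
    finally:
        buf.close()
        os.close(fd)

elapsed = timed_random_reads(BLOCK_SIZE)
print(f"{BLOCK_SIZE} B random reads: {NUM_READS / elapsed:,.0f} IOPS")

Run it once before and once after switching the LBA format, with the same parameters, and compare the IOPS numbers.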

BTW, their specs want 100 lfpm of airflow. I'm probably nowhere close to that. But my case temperature is also much cooler than the 55 C that spec assumes, and I'm also not subjecting it to nonstop QD256 worth of database queries. There's a "yellow" LED, which warns of overheating. I was quite nervous until I discovered that it's also got an "amber" LED which signifies normal activity. Nice one, guys.
 
In this context, your statement "In the grand scheme of things, proprietary implementations aren't good for pricing nor innovation" is total nonsense. As a Linux user, I am generally willing to salute the open-source flag for any reason, but not here. Over the last 30+ years, consumers have seen phenomenal benefit from bringing additional functionality either on-die or on-package. One lesson learned is that, from a consumer perspective, nobody cares what standards are used when a technology is brought on-package or on-die, as long as it works well and benefits either performance or cost. Obviously, this is a big detriment to producers whose proprietary technology is abandoned when alternate functionality is brought on-package or on-die.

All those fancy-named new consortiums are fighting a rear-guard action to delay the benefits of increased integration. Obviously, if you have proprietary technology and you feel you can gain a competitive advantage by promoting your proprietary stuff as an open solution, you go for it. From a consumer-benefit perspective, increased integration rocks.
 
They care when they have to pay a monopolist whatever prices it feels like charging. And proprietary standards affect the rate of evolution in the industry.

Seems like you're posing a false dichotomy between open standards and increased integration. Intel could integrate an alternative to OmniPath just as easily. Integration of either protocol would bring the same benefits, but if they integrate a proprietary standard (and customers are willing to buy it), then they own the stack and all the other infrastructure that comes with it.

I think the issue is that you're looking at this too narrowly. I'd guess you don't care, as long as the software stack is non-proprietary. That's important, but it's not the whole picture.
 
I think the important thing to remember here is that we are not dealing with small businesses. The players involved are companies whose gross revenues exceed the GDP of many countries. The amounts of money and the number of people involved in developing these interfaces are staggering, to say the least. Not a single one of these companies is developing its own proprietary interface, or getting on board with a consortium, for altruistic purposes. They are doing it because their leadership sees that path as the best way to do one thing: make money.

We can argue semantics and play armchair football (which is actually pretty fun), but I think we first need to look at the consortium members. Everyone in that group stands to gain something from the development of the open architecture, and not one of them can really justify developing their own proprietary interface, whether due to cost or an inability to gain vendor traction. Intel, on the other hand, really had something to gain by making the investment. I highly doubt that if any of the companies involved were in Intel's spot, anything would be different other than the name we would be attacking.

Open standards may make the tech they cover readily available and inexpensive, but don't fool yourself: they are full of compromises, and once established they can be slow to move. Proprietary designs can be more customized and integrated, and they tend to move quicker. My example here: why are we already on Thunderbolt 3 while I am still using 1000BASE-T for my home network? The answer is simple. Show me the money.
 
Maybe I am missing something. Can anyone identify any instance in history where the on-package or on-die interconnect standard was a material factor driving technology selection? I am not talking about folks who purchase a sound card because they want better-quality sound; that is a purchase driven by a preference for upgraded technology, even though the interface may be downgraded. Open standards matter a lot, just not after the technology is brought on-package or on-die.
 
Also, let's get away from the on-die concept and just stick with bus architecture. IBM created ISA. Intel created the original PCI bus. Intel created AGP. Fundamentally, pretty much all of the "open" standards you think of today started out in proprietary form and were opened up for vendor-support reasons. Many were turned over to consortium groups to maintain because there was no profit in maintaining them.
 
So, with whom are you arguing, exactly? I think we all understand how the game works. Suppliers want proprietary standards, while customers want open standards to avoid vendor lock-in.

true.

Bad example, not least because Thunderbolt 3 has a cable-length limit of 3 m, while copper-based 10GBASE-T reaches 55-100 m. More importantly, 10GBASE-T has been standardized for 10 years.

You're not taking issue with the fundamental premise of the article, are you? The point of integrating OmniPath into the CPUs is clearly aimed at dominating the other boxes in the data center that they don't currently own. If they had a weaker position, they'd have adopted a standard like 100 Gigabit Ethernet. But they clearly decided to leverage their CPU dominance to wedge themselves into more layers of the stack.

Like how Nvidia has G-Sync, CUDA, and NVLink, while AMD has FreeSync, OpenCL, and PCIe. Nvidia can only push their proprietary standards because they're top dog.
 