News Intel has PCIe 4.0 Optane SSDs Ready, But Nothing to Plug Them Into

twotwotwo

I get that companies can try to fudge the reasons for their actions, but if it ends up clear that Intel delayed an SSD, or turned off features they'd built, because they worked too well with others' hardware, isn't that the sort of thing that gets you sued for anticompetitive behavior? Maybe they fight it off, maybe not, but there's downside risk.

They should just release the best SSD they can when it's ready, and if the timing means they have to get clever to get good PR, do what they have to. Maybe review samples come with conditions about what platform you test on, or, if there's an Intel-internal PCIe 4.0 platform to test on, you let the press mess with that.

PR difficulties are temporary; consent decrees are long-term.
 
TerryLaze

I get that companies can try to fudge the reasons for their actions, but if it ends up clear that Intel delayed an SSD, or turned off features they'd built, because they worked too well with others' hardware, isn't that the sort of thing that gets you sued for anticompetitive behavior?
Maybe they are releasing it because they already tested it on Ryzen, and it's still faster on Intel with PCIe 3.0 than it is on Ryzen with PCIe 4.0... the interface alone doesn't make things go any faster; you need a CPU that has enough cycles to pull enough data off the disk.
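To put rough numbers on that claim (back-of-the-envelope only; the drive's internal throughput below is a made-up figure, and the link ceilings are approximate spec numbers):

Code:
# Illustrative: the link ceiling only matters once the drive can saturate it.
LINK_GBS = {"PCIe 3.0 x4": 3.9, "PCIe 4.0 x4": 7.9}  # approx one-direction ceilings
drive_internal_gbs = 2.5  # hypothetical sequential-read limit of the SSD itself

for link, ceiling in LINK_GBS.items():
    print(link, "->", min(ceiling, drive_internal_gbs), "GB/s effective")
# Both links print 2.5 GB/s: a faster interface alone changes nothing.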

Maybe it is faster on Ryzen and Intel will still release it, because their main market for Optane is Optane RAM DIMMs, and Intel made the same amount of money from that in the last quarter as AMD made from all desktop CPU and GPU sales put together...
 

JayNor

Intel reported Ice Lake Server chips in the lab, booting Linux, in Dec 2018. These have PCIe 4.0. They may not be available to the general public, but Intel samples its server chips to big customers long before general release.


Sampling Ice Lake Server since May, according to this:
https://www.anandtech.com/show/14314/intel-xeon-update-ice-lake-and-cooper-lake-sampling

Intel is also shipping Stratix 10 DX with PCIe 4.0:

https://www.tomshardware.com/news/intel-stratix-10-dx-upi-cxl,40436.html
 

thGe17

This means no PCIe 5.0 in the near future... did Intel cancel it? I thought it was coming in 2020?

PCIe 5.0 has been slated for Sapphire Rapids SP in 1Q21 since the updated roadmap in 2Q19 (and the roadmap hasn't changed since then). The upcoming Whitley platform (Cooper Lake & Ice Lake SP) will support up to PCIe 4.0 with Ice Lake, so boards will presumably be PCIe 4.0-ready in general (otherwise I would have expected a different platform for Ice Lake SP CPUs with specific PCIe 4.0 support).

Btw... the point this article tries to make feels like clickbait. What's the problem? The SSDs are in their final development/validation stage but not released yet. They support PCIe 4.0, will work well with, for example, IBM and AMD systems, and will most likely also work with every 3.0 system in general (once they're officially launched, any time soon). And in a few months, Intel will launch Ice Lake SP with native PCIe 4.0 support (and of course Intel can already test the drives together with Ice Lake SP in the lab).
It feels like everybody is searching and hoping for some kind of failure for a headline. Absolute nonsense. Should the team be forced to delay their development just to be exactly on time and avoid creating a false appearance? :-x
 

bit_user

This means no PCIe 5.0 in the near future... did Intel cancel it? I thought it was coming in 2020?
IIRC, an Intel server CPU with PCIe 5.0 is on their roadmap for 2021. They need it for CXL, which is how they plan on communicating with their datacenter GPUs and AI accelerators. So, I'm pretty sure it'll actually happen next year.

However, PCIe 5.0 won't be coming to desktops in the foreseeable future. It's more expensive to implement, burns more power, and has tighter limitations on things like trace length. Beyond that, there's no need, as even PCIe 4.0 is currently borderline overkill for desktops.
 

thGe17

[...] and Intel made the same amount of money from that in the last quarter as AMD made from all desktop CPU and GPU sales put together...

Not exactly, but not far-fetched either.
In 3Q19 AMD had $1.80B revenue overall and $1.27B revenue in the Computing and Graphics segment (consumer/mobile CPUs, chipsets, and all GPUs, plus some services and IP).
In 3Q19 Intel already had $1.3B revenue in the Non-Volatile Memory Solutions Group. (Additionally, the IoT Group had $1.2B revenue and the Client Computing Group had $9.7B revenue in 3Q19.)
 

nofanneeded

IIRC, an Intel server CPU with PCIe 5.0 is on their roadmap for 2021. [...]

However, PCIe 5.0 won't be coming to desktops in the foreseeable future. [...]

There is a huge need for PCIe 5.0: we won't need 16 lanes for the GPU anymore, and this will open up more lanes for other cards...

I can also see potential for Thunderbolt 5 with 4 lanes of PCIe 5.0: an external GPU with the same bandwidth as 16 lanes of 3.0 today...
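Quick sanity check on that bandwidth claim (a sketch; per-lane rates are the approximate spec figures after 128b/130b encoding, protocol overhead ignored):

Code:
# Approximate per-lane PCIe throughput in GB/s: 8 GT/s (3.0), 16 GT/s (4.0),
# 32 GT/s (5.0), each with 128b/130b encoding.
GBPS_PER_LANE = {3: 8 * 128 / 130 / 8, 4: 16 * 128 / 130 / 8, 5: 32 * 128 / 130 / 8}

def bandwidth(gen, lanes):
    """Approximate one-direction bandwidth in GB/s for a PCIe link."""
    return GBPS_PER_LANE[gen] * lanes

print(round(bandwidth(3, 16), 2))  # PCIe 3.0 x16: ~15.75 GB/s
print(round(bandwidth(5, 4), 2))   # PCIe 5.0 x4: ~15.75 GB/s, a quarter of the lanes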

Docking stations for notebooks would also be something huge. I know it would draw a lot of power, but it could be limited to wall power only, not batteries, when docked.
 

bit_user

There is a huge need for PCIe 5.0: we won't need 16 lanes for the GPU anymore, and this will open up more lanes for other cards...
I follow this argument, and it's one of the more compelling ones for yet faster speeds. But, you need to think through how such a product would be introduced to the public.

For instance, let's say AMD decided to make their PCIe 4.0 GPU slots x8... except, what if you want to put a PCIe 3.0 card in that slot? Even if the slot is mechanically x16, you don't want to give up half the lanes for PCIe 3.0 cards. So you have to go ahead and make it a full x16 slot anyway.

Looking at the flip side, if they made the RX 5700 XT a PCIe 4.0 x8 card and someone plugged it into an Intel board, they'd be upset about getting only 3.0 x8 performance.

So, the catch in what you're proposing is that there's no graceful way to transition to narrower GPU slots that won't burn people who don't have a matched GPU + mobo. If pairing an old GPU with a new mobo, or a new GPU with an old mobo, you still want x16. So, that means you can't get lane reductions in either the card or the mobo. At least, not when first introduced, which is when the technology would have the greatest impact on price, power, and other limitations (e.g. board layout). There's just no easy transition path to what you want, even if the downsides could eventually be solved (spoiler: I think they can't).
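To make the mismatch concrete, a little sketch (my simplification of link training: the link comes up at the lower generation and the narrower width of the two sides):

Code:
GBPS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}  # approximate GB/s per lane

def negotiated(card, board):
    """(generation, lanes) for each side -> what the link actually runs at."""
    gen = min(card[0], board[0])
    lanes = min(card[1], board[1])
    return gen, lanes, round(GBPS_PER_LANE[gen] * lanes, 1)

print(negotiated((4, 8), (3, 16)))   # -> (3, 8, 7.9): the hypothetical x8 card, halved
print(negotiated((3, 16), (3, 16)))  # -> (3, 16, 15.8): what the buyer expected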

I can also see potential for Thunderbolt 5 with 4 lanes of PCIe 5.0: an external GPU with the same bandwidth as 16 lanes of 3.0 today...
That ain't gonna happen. You should read more about the limitations of Thunderbolt 3, which will require a very short, very expensive cable to reach its maximum potential. And that's still PCIe 3.0.

Docking stations for notebooks would also be something huge. I know it would draw a lot of power, but it could be limited to wall power only, not batteries, when docked.
This is actually conceivable. You could put the connector right next to the CPU. There's not much concern about legacy, because docking stations tend to be proprietary anyhow (except for USB3- or Thunderbolt-based ones, but let's leave those aside). The biggest issue would be making the connector robust enough to deal with frequent plugging/unplugging, dirt, etc. Those might be the deal-breakers in that scenario.

Given how well laptops could "get by" with a slower, narrower link (if not just Dual-Port Thunderbolt 3), I don't know if the demand would be there to justify it.
 

bit_user

In 3Q19 Intel already had $1.3B revenue in the Non-Volatile Memory Solutions Group.
But that should include their entire NAND-based SSD business as well. I have a few NAND-based Intel SSDs, but no Optane models.

Given the current $/GB and limited software support, I'm just skeptical that Optane has ramped very much yet.
 

nofanneeded

So, the catch in what you're proposing is that there's no graceful way to transition to narrower GPU slots that won't burn people who don't have a matched GPU + mobo. [...]

That ain't gonna happen. You should read more about the limitations of Thunderbolt 3 [...]

Given how well laptops could "get by" with a slower, narrower link, I don't know if the demand would be there to justify it.

Well, we will have to "move on" one day; we can't be stuck using long 16-lane slots forever just to make people with older cards "happy".

I think x16 slots should disappear for GPUs once PCIe 5.0 is ready in the desktop market. If it happens, I mean.

As for the cable-length limitations of TB3: we do have eGPU boxes already, and I don't think TB5 will be any different. Expensive cables? They still have a market.

And finally, about the notebook docking station: the demand is here, but the technology is not yet. PCIe 5.0 will open it up, and the demand will follow; it will change the desktop replacement market forever.
 

atlr

Buyers with POWER9 and Zen 2 systems are looking for PCIe Gen 4 peripherals right now. Intel holding back a product release to sync with Intel CPUs would give competitors an opportunity to establish market share.
 

spongiemaster

Maybe it is faster on Ryzen and Intel will still release it, because their main market for Optane is Optane RAM DIMMs, and Intel made the same amount of money from that in the last quarter as AMD made from all desktop CPU and GPU sales put together...

I don't know what revenue they are generating from Optane, but it is known that Intel has been losing money on it for years, and it isn't getting better.

https://www.networkworld.com/article/3452398/storage-trends-what-are-the-best-uses-for-optane.html

 

bit_user

Well, we will have to "move on" one day; we can't be stuck using long 16-lane slots forever just to make people with older cards "happy".
It's yet another hurdle, though. Since PCIe is not a major bottleneck and PCIe 5 will undoubtedly add cost and have other downsides, I don't see it happening.

For a good example of how server technology doesn't always trickle down to consumers, consider how 10 Gigabit Ethernet has been around for like 15 years and is still a small niche outside of server rooms. Meanwhile, datacenters are already moving beyond 100 Gbps.

As for the cable-length limitations of TB3: we do have eGPU boxes already, and I don't think TB5 will be any different. Expensive cables? They still have a market.
Oh, but you want them to be 4x as fast? I think it's not technically possible. I guess 100 Gbps Ethernet shows us that you could reach well beyond TB3-DP's 40 Gbps using fiber optics, but those cables aren't exactly consumer-friendly.

And finally, about the notebook docking station: the demand is here, but the technology is not yet. PCIe 5.0 will open it up, and the demand will follow; it will change the desktop replacement market forever.
The thing is that you could have a docking station with a PCIe 3.0 x16 connection today. To improve docking-station connectivity, you don't need PCIe 5. The market will go for whatever's cheapest and most practical. For the foreseeable future, that will exclude PCIe 5.
 

JayNor

We have already seen Intel's 2021 plans for Aurora, with PCIe 5.0 enabling a cache-coherent interconnect to multiple GPUs via CXL. Does this also make sense as a chiplet interconnect for consumer chips? They reported connecting the AMD GPU chiplet in Kaby Lake-G with PCIe 3.0, so this seems like a reasonable extension. It also appears to me that they need to move to a mesh on laptop chips to continue the escalating core-count competition, so perhaps the current integrated GPU's attachment to the ring bus will need an alternative solution.
 

nofanneeded

It's yet another hurdle, though. Since PCIe is not a major bottleneck and PCIe 5 will undoubtedly add cost and have other downsides, I don't see it happening. [...]

Oh, but you want them to be 4x as fast? I think it's not technically possible. [...]

The thing is that you could have a docking station with a PCIe 3.0 x16 connection today. [...]

1 - Again, it is not about a bottleneck; it is about freeing up more lanes for other cards, as I said. If you have 16 lanes of PCIe 5.0, you can use 8 lanes for the GPU as standard and the rest for other cards in non-HEDT PCs, and it will be cheaper as well (rough numbers in the sketch after these points). Today, when you use a x16 GPU, you consume all 16 lanes. Keep in mind that we are talking about PCIe 5.0, in which 8 lanes are equal to 32 lanes of PCIe 3.0, so it is more than enough for future cards...

2 - TB5/TB4 are coming... and the cables are not a big issue; we already have PCIe 3.0 x8 and x16 cables. Expensive, yes, but they have a market.

3 - Are you saying that a huge 16-lane docking-station slot is as durable as a smaller one? The longer one is the one that will break more easily. That's one. And two, most mobile CPUs only have 8 or 4 lanes available...
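Rough numbers for point 1 (a hypothetical lane split with approximate spec-level rates, nothing official):

Code:
GEN5_PER_LANE = 3.94   # GB/s per PCIe 5.0 lane, approximate
GEN3_PER_LANE = 0.985  # GB/s per PCIe 3.0 lane, approximate

allocation = {"GPU": 8, "NVMe SSD": 4, "capture card": 4}  # 16 CPU lanes total

for device, lanes in allocation.items():
    gbs = lanes * GEN5_PER_LANE
    print(f"{device}: x{lanes} Gen5 = {gbs:.1f} GB/s (~{gbs / GEN3_PER_LANE:.0f} lanes of 3.0)")
# GPU: x8 Gen5 = 31.5 GB/s (~32 lanes of 3.0)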
 

bit_user

it is about freeing up more lanes for other cards, as I said. If you have 16 lanes of PCIe 5.0, you can use 8 lanes for the GPU as standard and the rest for other cards
I get that. But there's the market challenge I mentioned in transitioning to narrower GPU slots, and then there are the physical realities of PCIe 5.0 requiring motherboards with more layers, signal retimers, and generally burning more power.

You're used to technology always getting faster, and basically for free, because that's how it's been for probably your entire life. What you need to understand is that there are limits to the frequencies that can be pushed down a trace on a circuit board (and across a connector). As you near those limits, the technical challenges multiply. Smaller silicon manufacturing nodes won't improve this situation, either. PCIe 5 is just expensive to implement, period. And PCIe 6 won't be any better.

the cables are not a big issue; we already have PCIe 3.0 x8 and x16 cables. Expensive, yes, but they have a market.
Those are not consumer-friendly like existing Thunderbolt or USB cables.

If you don't believe me, just wait.
 

TJ Hooker

Maybe it is faster on Ryzen and Intel will still release it, because their main market for Optane is Optane RAM DIMMs, and Intel made the same amount of money from that in the last quarter as AMD made from all desktop CPU and GPU sales put together...
But Optane DIMMs don't use PCIe, so what do they have to do with this article?
 

bit_user

But Optane DIMMs don't use PCIe, so what do they have to do with this article?
I think @TerryLaze's point is that Intel will release the PCIe SSDs anyway, because they don't really care about that market; they mainly care about the Optane DIMM market.

I could see it both ways. Intel is clearly threatened by AMD's inroads into the datacenter, and providing PCIe 4.0 Optane drives right now would only serve to strengthen their platform's potential. On the other hand, if the nonvolatile solutions group is sufficiently independent within Intel, then maybe they're motivated just to ship whatever they can sell.
 
TerryLaze

I could see it both ways. Intel is clearly threatened by AMD's inroads into the datacenter, and providing PCIe 4.0 Optane drives right now would only serve to strengthen their platform's potential. On the other hand, if the nonvolatile solutions group is sufficiently independent within Intel, then maybe they're motivated just to ship whatever they can sell.
Intel can use Optane as RAM; there is absolutely no reason to use Optane as a disk in a PCIe slot if you can use it as main RAM.
This product is clearly made to target systems that cannot use Optane as RAM at all.
 

bit_user

Intel can use Optane as RAM; there is absolutely no reason to use Optane as a disk in a PCIe slot if you can use it as main RAM.
This product is clearly made to target systems that cannot use Optane as RAM at all.
Two possible reasons: capacity and software support.

You can clearly fit more Optane on a PCIe card than on a DIMM. And depending on how many lanes you use per card, you could pack in quite a lot of PCIe cards.
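Back-of-the-envelope, with every number assumed for illustration (the lane budget, per-device capacities, and slot count are placeholders, not specific platforms or SKUs):

Code:
cpu_lanes, lanes_per_drive = 64, 4    # assumed lane budget and per-drive width
tb_per_drive, tb_per_dimm = 1.5, 0.5  # assumed top capacities per device
dimm_slots_for_optane = 6             # assumed DIMM slots you'd spare for Optane

drives = cpu_lanes // lanes_per_drive
print(f"PCIe: {drives} drives x {tb_per_drive} TB = {drives * tb_per_drive} TB")
print(f"DIMMs: {dimm_slots_for_optane} x {tb_per_dimm} TB = {dimm_slots_for_optane * tb_per_dimm} TB")
# -> PCIe: 16 drives x 1.5 TB = 24.0 TB vs DIMMs: 6 x 0.5 TB = 3.0 TB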

Also, software support for Optane DIMMs is still probably a bit early. Some customers might not yet have a validated stack that fully supports them.
 