News: Intel Kills VROC Prematurely, Then Changes Course

I can see VROC being sunset sooner rather than later. Outside of a physical appliance there is no need for it; on a virtual host you will have either SDS or a vSAN of some kind. At this point there is no reason not to virtualize your entire data center anyway.
 

Amdlova

For consumers it has no use at all. Where will you put these NVMe drives when you only have 20 PCIe lanes? For the last PC I built, I had to look through all the cheap boards to find one with an extra slot to do something; they have NVMe slots everywhere.
 

thisisaname


Its days are still numbered; it's just that the announcement was premature.
 

jp7189

Software-defined storage is flexible and great if you have many different workloads, or don't know what the loads will be (e.g. a cloud provider). Hardware is more efficient and performant with the same resources IF the workload is well defined.

My question is... is there a non-SD alternative to VROC? PCIe cards don't come close.
 
When you are doing SDS or vSAN (pick your provider flavor) with NVMe SSDs, you are not using a physical RAID card (at least since the Xeon Scalable v1 or AMD Naples generations). Those systems want your drives as plain JBOD, since they assign the writes as needed. Again, the only place VROC would be useful is in a physical appliance (a non-virtual server running only one application, like a DB). However, the number of physical appliances is shrinking by the day, as there is no reason not to virtualize them.
 

jp7189

That didn't answer my question. My specific example is a backup box that moves a 111TB (and growing) data set. The data comes in from various sources via 4x 100Gb NICs, is written locally, and then gets sent to a tape library that needs at least 4.5GB/s to minimize backhitching. We recently tried replacing it with a 6-node SDS. The SDS offers flexibility and easy future expansion, but it burns 67% of the raw capacity after factoring in node and cluster redundancies, costs 4x the price for the same capacity, uses up 12x 100Gb switch ports, and is nowhere near the performance.
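A quick back-of-envelope in Python for those numbers. The per-node capacity, replication factor, and rebuild reserve below are made-up assumptions for illustration, not details from the post:

```python
# Back-of-envelope check of the backup-box numbers above.
# Assumptions (not from the post): replication factor 2 and one
# node's worth of capacity reserved for rebuilds; real SDS
# accounting varies by product.

GB_PER_GBIT = 1 / 8  # a 100 Gb/s link is ~12.5 GB/s

nics = 4
ingest_gbs = nics * 100 * GB_PER_GBIT       # aggregate NIC bandwidth
tape_min_gbs = 4.5                          # minimum to avoid backhitching
print(f"NIC aggregate: {ingest_gbs:.1f} GB/s (tape needs {tape_min_gbs} GB/s)")

nodes = 6
raw_tb_per_node = 50                        # hypothetical
raw_tb = nodes * raw_tb_per_node
replication_factor = 2                      # "RAID 10"-style mirroring
rebuild_reserve = raw_tb / nodes            # keep one node's capacity free

usable_tb = (raw_tb - rebuild_reserve) / replication_factor
burned = 1 - usable_tb / raw_tb
print(f"usable: {usable_tb:.0f} of {raw_tb} TB raw ({burned:.0%} burned)")
# With these assumptions ~58% of raw is already gone; metadata, checksums,
# and per-node free-space headroom push real-world numbers higher still.
```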
 
You never mentioned a backup box. My answer was perfect until you changed the parameters of your question. Your loss of 67% of raw doesn't make any sense. The SDS products I've worked with start as a "RAID 10" and you may be able to get different encodings with other licenses. Even in RAID 10 you only lose half, and that would be the same with any solution using that encoding, regardless of redundancies. Not to mention that SDS products don't want NVMe drives in a hardware RAID anyway; they want them set up as JBOD. Again, VROC is only useful for PHYSICAL APPLIANCES.
 

jp7189

My question hasn't changed a bit: "is there a non-SD alternative to VROC?" I'm genuinely curious to know the answer.

By "RAID 10" I assume you mean RF2 here, as a 4-disk storage array doesn't go very far. Sure, RAID 10/RF2 is 50% in theory, but adding hot spares drops that below 50% in practice.
 
RAID 10 means mirrored and striped. It is the fastest RAID variety and needs a minimum of four disks. Why are you using hot spares with SSDs? That is a complete waste of resources. With HDDs it was best practice; however, with SSDs and their far lower failure rate, a cold spare is now best practice.

You can use OpenZFS instead of VROC. It takes your NVMe drives as JBOD and then does its own encoding for redundancy. Again, just like VROC, it is only usable in a physical appliance.
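For anyone unfamiliar with the layout, here's a toy Python sketch of "mirrored and striped"; it illustrates the concept, it is not real RAID code:

```python
# Toy RAID 10 layout: stripe logical blocks across mirrored pairs.
# Four disks = two mirror pairs, so usable capacity is half of raw.

def raid10_placement(lba: int, disks: int = 4) -> tuple[int, int]:
    """Return the two disks holding copies of logical block `lba`."""
    pairs = disks // 2                 # mirror pairs to stripe across
    pair = lba % pairs                 # striping: round-robin over pairs
    return (2 * pair, 2 * pair + 1)   # mirroring: both disks in the pair

for lba in range(4):
    print(f"block {lba} -> disks {raid10_placement(lba)}")
# block 0 -> disks (0, 1)
# block 1 -> disks (2, 3)
# block 2 -> disks (0, 1) ...
```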
 
Does anyone know if VROC has a "write hole", which is characteristic of non-hardware RAID solutions? The only way I'd pay for it (instead of using Linux's mdraid driver) is if it didn't. For my purposes, mdraid has been plenty fast and reliable.
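For context, the "write hole" is what happens when a crash lands between the data write and the parity update. Here's a toy RAID 5 sketch in Python (illustrative only, not VROC's or mdraid's actual implementation):

```python
# Toy RAID 5 write hole: crash between writing data and updating parity.
from functools import reduce

data = [0b1010, 0b0110, 0b1100]              # three data "disks"
parity = reduce(lambda a, b: a ^ b, data)    # parity disk = XOR of data

data[0] = 0b0001                             # write new data...
# ...crash here: parity was never updated (the "write hole")

rebuilt = parity ^ data[0] ^ data[2]         # disk 1 fails; rebuild from
print(f"rebuilt {rebuilt:#06b}, expected {0b0110:#06b}")  # stale parity: wrong
# Hardware RAID closes the hole with battery-backed cache; mdraid offers
# a journal (--write-journal); ZFS avoids it with copy-on-write.
```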


Does OpenZFS have a "write hole"?
No idea. I've only ever used ZFS as the file system on pfSense routers. I know about OpenZFS because of TrueNAS, and like most solutions it prefers JBOD for NVMe drives over a RAID setup.
 
