Not an expert in this at all, but I was thinking about how memory modules get more expensive the faster they are, so in reverse, why not develop slower modules but with more capacity?
Again, not an expert, so that might be absolutely flawed thinking on my part.
Not really. You have to understand how NVMe is made and what it really is.
Basic generic architecture.
System chipset (nowadays this is inside the CPU) -> PCI/PCI-X/PCIe bus -> Host Bus Adapter -> Drive(s).
So back in the day we would have a SAS RAID5 disk array.
ChipSet -> PCI-X 133 -> SAS HBA -> 6x SAS cables -> 6x SAS drives.
All six drives could be accessed simultaneously and data was striped across all six of them. Parity information took 1/6 of the bandwidth, so we had an effective bandwidth of 5 drives.
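The RAID5 math above can be sketched in a few lines. This is a toy model with made-up per-drive numbers, not a benchmark; the function name and figures are mine, just to illustrate why six drives give the usable bandwidth of five:

```python
# Toy model of RAID5 effective bandwidth (illustrative numbers only).
# In RAID5 across n drives, each stripe holds (n-1) data blocks plus 1 parity
# block, so only (n-1)/n of the aggregate throughput carries actual data.

def raid5_effective_bandwidth(num_drives, per_drive_mbps):
    """Usable data bandwidth of a RAID5 array, ignoring controller overhead."""
    assert num_drives >= 3, "RAID5 needs at least 3 drives"
    aggregate = num_drives * per_drive_mbps
    return aggregate * (num_drives - 1) / num_drives

# Six SAS drives at a hypothetical 100 MB/s each:
# 600 MB/s raw, but 1/6 goes to parity, leaving 500 MB/s of data.
print(raid5_effective_bandwidth(6, 100))  # 500.0
```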
NVMe does something similar, only everything from the HBA forward is integrated into a single device.
ChipSet -> PCIe -> NVMe
Inside the NVMe it's:
Controller -> parallel circuit paths (channels) -> NAND flash memory chips.
If we have two paths to two NAND flash chips, we get the effective bandwidth of two chips. Four paths to four chips gives the effective bandwidth of four chips; five or six paths to five or six chips gives the bandwidth of five or six chips. The wider your path, the more complicated your controller has to be and the more chips you need to use, which makes the whole thing more expensive. You could do something like two paths to four chips (two per path), but you're still spending similar money for half the bandwidth.
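That channels-vs-chips tradeoff can be sketched the same way. Again a toy model with hypothetical per-chip throughput, just to show that chips beyond the channel count add capacity but not speed:

```python
# Toy model of NVMe internal parallelism (illustrative only).
# At any instant the controller can drive at most one chip per channel,
# so bandwidth scales with whichever is scarcer: channels or chips.
# Extra chips sharing a channel add capacity, not bandwidth.

def nvme_effective_bandwidth(channels, chips, per_chip_mbps):
    """Peak read bandwidth, limited by min(channels, chips)."""
    return min(channels, chips) * per_chip_mbps

# Four channels, four chips (hypothetical 400 MB/s per chip):
print(nvme_effective_bandwidth(4, 4, 400))  # 1600
# Two channels feeding four chips: similar chip cost, same capacity,
# but only two chips' worth of bandwidth at a time.
print(nvme_effective_bandwidth(2, 4, 400))  # 800
```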