"The performance of Nvidia's GeForce RTX 4060 Ti is high enough to make the prototype one of the best graphics cards available. Adding two M.2-2280 slots will surely make it one of the most useful graphics boards." Said no one ever
A PCIe 5.0 switch chip with enough lanes to pass 5.0 x16 through, plus enough spares to provide meaningful additional functionality, will cost $300+. So a switch that sat in front of a 4090 GPU would allow for 2 PCIe 5.0 M.2 slots, 4 PCIe 4.0 M.2 slots, or 8 PCIe 3.0 M.2 slots to come along for the ride.
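Those slot counts line up with simple bandwidth math. A back-of-the-envelope sketch (per-lane rates are the usual approximate figures, and `drives_behind_switch` is an illustrative helper, not anything from a real switch datasheet):

```python
# Approximate per-lane PCIe throughput in GB/s, one direction
PER_LANE_GBPS = {3: 1.0, 4: 2.0, 5: 4.0}

def drives_behind_switch(upstream_gen: int, upstream_lanes: int, drive_gen: int) -> int:
    """Number of x4 drives of a given generation that can share the spare
    upstream bandwidth of a switch without being oversubscribed."""
    upstream_bw = PER_LANE_GBPS[upstream_gen] * upstream_lanes
    drive_bw = PER_LANE_GBPS[drive_gen] * 4  # M.2 NVMe drives are x4
    return int(upstream_bw // drive_bw)

# Budget of 8 spare Gen5 lanes' worth of bandwidth behind the switch:
for gen in (5, 4, 3):
    print(f"PCIe {gen}.0 x4 drives supported: {drives_behind_switch(5, 8, gen)}")
# -> 2, 4, and 8 drives respectively, matching the slot counts above
```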
> Routing unused lanes from the x16 slot to NVMe slots hardly costs anything, albeit with the caveat that only motherboard-CPU combinations that support PCIe bifurcation on the x16 slot will be able to use them.

Intel consumer platforms only do x8 splits, which means for the product to be viable it'd either have to be only 1 NVMe slot or need a bifurcation chip to handle the lane split.
> Intel consumer platforms only do x8 splits

Both Intel and AMD used to be able to go down to x8/x4/x4. Got to love artificial product segmentation, where vendors remove features to upsell people.
> Both Intel and AMD used to be able to go down to x8/x4/x4. Got to love artificial product segmentation where vendors remove features to upsell people.

Definitely, and I wish all platforms had bifurcation down to x1 (AMD does x2 on Epyc/TR and Intel is still at x4 on Xeon), because that might incentivize vendors to include PCIe 5.0 x1 M.2 ports, and maybe some drives to go along with them. As much as I hate M.2 as a format, this would go a long way toward letting people easily add storage.
> Definitely, and I wish all platforms had bifurcation down to x1 (AMD does x2 on Epyc/TR and Intel is still x4 on Xeon) because that might incentivize vendors to include PCIe 5.0 x1 M.2 ports and maybe some drives to go along with them.

If bifurcation down to x1 happens on a mainstream platform, it will likely come at the expense of USB 3.x Gen 2(x2)/USB4/Thunderbolt and SATA ports, in an HSIO arrangement like the one chipsets currently use.
> From the CPU's point of view, a PCIe lane is a PCIe lane; they really don't care. It's the BIOS that needs to determine which lanes belong to which device ID and enumerate them on bootup.

If the CPU only has two PCIe controllers associated with the x16 interface, as is the case with Intel's current LGA1700 chips, it can only be split two ways, and there is nothing the BIOS can do about it. If you want to split the PCIe complement four ways, you need four controllers to keep tabs on each port's resources.
> If the CPU only has two PCIe controllers associated with the x16 interface, as is the case with Intel's current LGA1700 chips, it can only be split two ways and there is nothing the BIOS can do about it. If you want to split the PCIe complement four-ways, you need four controllers to keep tabs on each port's resources.

A lane is a lane. It's up to the BIOS to determine how those lanes are used.
> A lane is a lane. It's up to the BIOS to determine how those lanes are used.

A lane may be a lane, but each lane needs to be associated with a bus controller to coordinate each N-lane group and manage the port's IO. You cannot split PCIe lanes into more chunks than there are controllers available to manage them. Four controllers, four devices max, no matter how many PCIe lanes there are.
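The counting argument above is easy to model. A toy sketch (just the constraint, not firmware; the function name and defaults are made up to mirror an LGA1700-style CPU):

```python
def split_is_possible(split: list[int], total_lanes: int = 16, controllers: int = 2) -> bool:
    """A link can only be bifurcated into as many ports as the CPU has
    controllers behind it, no matter how the lanes are grouped."""
    return sum(split) == total_lanes and len(split) <= controllers

# LGA1700-style: 16 lanes, but only two controllers on the x16 interface
assert split_is_possible([8, 8])                    # x8/x8: fine
assert not split_is_possible([8, 4, 4])             # x8/x4/x4: third port has no controller
assert split_is_possible([8, 4, 4], controllers=3)  # fine on platforms with a third controller
```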
> Yes, the PCIe slot provides 75 watts of power, the eight-pin 150 W, and you have the card eating 160 W... leaving an eye-watering 65 W by SSD standards. More than enough power to run 6 M.2 drives at full tilt (assuming max 10 W on writes) if there had been enough PCIe lanes left to use that excess wattage. But not everyone identifying as an enthusiast is going to know all those exact numbers off the top of their head... though they should have some guesstimates in the ballpark, to be fair. Yet that's also not including those readers who aren't enthusiasts and don't know any better. Point being, I'd say it's a fair call on the author's part to include such language, all things considered. But I do get where you're coming from, don't get me wrong.

Not interesting, you say? Tell that to compact-PC owners using Mini-ITX mobos with a single PCIe x16 slot and at most 2 M.2 NVMe slots. And yes, I'm aware there are 'newer' mITX mobos that carry 3 M.2 slots, but that's still one less, not to mention connectivity sacrificed elsewhere to provide the extra slot. For mainstream/lite-enthusiasts who don't require the power-hungry RTX 4070 and above GPUs, while maximizing PCIe lane utilization on mainstream mITX mobos, this card would be a godsend! I've been wondering since 2018 why GPU designers don't implement something like this, considering there are numerous M.2 expansion cards out there but no hybrid options.
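For what it's worth, the wattage figures quoted above do check out:

```python
slot_power = 75    # W available from the PCIe slot
aux_power = 150    # W from the single 8-pin connector
card_draw = 160    # W the RTX 4060 Ti pulls, per the comment above
drive_peak = 10    # W generous per-drive ceiling during writes

headroom = slot_power + aux_power - card_draw
print(f"{headroom} W spare -> up to {headroom // drive_peak} drives at full tilt")
# -> 65 W spare -> up to 6 drives at full tilt
```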
That said, it's not a very interesting product IMO. Something with a direct connection to the GPU would have perked my ears, but this is just adding an extra M.2 slot by leeching off the vacant lanes left by an overly gimped GPU. Niche at best, and somewhat of a curiosity, but there's not much interesting beyond that. To be fair, though, we'd need Nvidia involved directly to get something like a direct GPU-to-onboard M.2 slot/SSD interface, not just an AIB partner.