Asus revealed the Hyper M.2 X16 GEN 4 Card at CES 2020.
Asus Intros a Quad M.2 PCI-Express 4.0 x16 Adapter for Very Fast NVMe Storage : Read more
Intel Fanboys: 15.75 GB/s ought to be enough for anyone.

Seems not so long ago Intel fanboys were talking about how useless PCIe 4.0 was.
> While benchmarks will be drool worthy for sure, I don't see a real market for this outside of serious workstations.

My friend told me I'd get 210% more framez in Minecraft.
>> While benchmarks will be drool worthy for sure, I don't see a real market for this outside of serious workstations.
> This adapter should work in a Gen3 slot, and the Gen4 M.2 drives do work in a Gen3 board; they just downgrade to the Gen3 protocol.
> The problem is where do you install this? Your GPU slot? Nobody has 32 lanes from the CPU in the consumer market...
> You could use a chipset x16 slot, but then you are limited by the DMI interface for any data going to/through the CPU, and that is just a PCIe x4 connection, so all that M.2 performance goes right out the window... And even AMD's X570 boards don't typically offer a second x16 slot with all 16 lanes wired. If you drop it in an 8-lane slot, you lose two of the M.2 drives...

It has to be aimed at the Threadripper/TR4 crowd, especially as that is the only way to run it at x16 (if that is really needed vs. x8). But that makes sense: the folks who might actually benefit from this, and have this kind of money to spend on a PC, are more likely the ones buying the new Threadrippers.
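To put rough numbers on that chipset-slot caveat, here's a quick back-of-the-envelope sketch in Python (assuming the standard per-lane PCIe rates and treating the chipset uplink as the x4 Gen3-class link described above; real-world figures will be somewhat lower):

    # Usable per-lane throughput in GB/s, after 128b/130b encoding overhead.
    GEN3_PER_LANE = 8e9 * (128 / 130) / 8 / 1e9    # ~0.985 GB/s
    GEN4_PER_LANE = 16e9 * (128 / 130) / 8 / 1e9   # ~1.969 GB/s

    four_gen4_ssds = 4 * 4 * GEN4_PER_LANE  # four x4 Gen4 drives on the card
    chipset_uplink = 4 * GEN3_PER_LANE      # x4 Gen3-class link to the CPU

    print(f"Four Gen4 x4 SSDs: ~{four_gen4_ssds:.1f} GB/s aggregate")  # ~31.5
    print(f"Chipset uplink:    ~{chipset_uplink:.1f} GB/s cap")        # ~3.9

So even a single one of those drives can saturate the uplink; the other three add nothing for transfers that have to reach the CPU.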
> While benchmarks will be drool worthy for sure, I don't see a real market for this outside of serious workstations.

If your workstation needs to read/write at 16+ GB/s on a regular basis, you probably need more RAM to keep larger chunks of whatever you are working on in memory or cached, so you don't have to rely as heavily on I/O.
> LOL, that's the theoretical maximum of an x16 PCIe slot (3.0).
> The Intel i7-9700K only has 16 PCIe 3.0 lanes.
> Plop one of those babies in (assuming it would work in an Intel system; it won't) and you have just saturated the entire Intel PCIe bus. Doesn't leave much for your GPU.

I was attempting to make a pun referencing a quote Bill Gates supposedly said.
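For anyone who wants to check where that 15.75 GB/s figure comes from, it falls straight out of the PCIe 3.0 line rate and encoding overhead. A minimal sketch in Python (just napkin math, not a benchmark):

    # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding.
    line_rate = 8          # GT/s per lane
    encoding  = 128 / 130  # usable fraction after encoding
    lanes     = 16

    gigabits  = line_rate * encoding * lanes  # ~126.0 Gb/s
    gigabytes = gigabits / 8                  # convert bits to bytes

    print(f"{gigabytes:.2f} GB/s")  # -> 15.75 GB/s for a full x16 Gen3 slot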
> But the X570 boards with an x8 slot (4.0) have the same bandwidth as the Intel boards with an x16 (3.0) slot. So, in theory, your graphics card should be just fine in an X570 board's x8 slot.
> Only one board has an x8/x8 arrangement that I'm aware of, but it's $650 (MSI Godlike).

If the graphics card is PCIe 3.0, not 4.0, then dropping it to x8 is the same to it as 3.0 x8, since it can't take advantage of the doubled speed of a 4.0 lane. But the reality is that only top-end graphics cards, in specific workloads/games, see any performance impact at x8 vs. x16 currently, so it isn't something to worry about for most people yet. I'd assume Nvidia's cards this year will all be 4.0 enabled.
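The x8-Gen4-equals-x16-Gen3 claim checks out on paper, since each PCIe generation doubles the per-lane rate. A quick sketch under the same napkin-math assumptions as above:

    # Usable per-lane throughput in GB/s, after 128b/130b encoding.
    gen3_lane = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s
    gen4_lane = 16e9 * (128 / 130) / 8 / 1e9  # ~1.969 GB/s

    print(f"PCIe 3.0 x16: {16 * gen3_lane:.2f} GB/s")  # ~15.75 GB/s
    print(f"PCIe 4.0 x8:  {8 * gen4_lane:.2f} GB/s")   # ~15.75 GB/s, same pipe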
> But the reality is that only top-end graphics cards, in specific workloads/games, see any performance impact at x8 vs. x16 currently.

The 4GB RX 5500 begs to differ, with up to a 100% performance improvement (double the frame rates) between 4.0 x8 and 3.0 x8. AMD screwed up real bad by making the RX 5500 x8-only while the majority of systems that might use it can only run it at 3.0 speeds; it really needed to support x16.
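And the RX 5500 case is the flip side of that same arithmetic: an x8-only card on a 3.0 board gets half the bandwidth it was designed around, which hurts most when a 4GB card is spilling textures into system RAM. Again just napkin math, not benchmark data:

    gen3_lane = 8e9 * (128 / 130) / 8 / 1e9
    gen4_lane = 16e9 * (128 / 130) / 8 / 1e9

    print(f"x8 at PCIe 4.0: {8 * gen4_lane:.2f} GB/s")  # what the card was built for
    print(f"x8 at PCIe 3.0: {8 * gen3_lane:.2f} GB/s")  # half the pipe on Gen3 boards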