News Asus Intros a Quad M.2 PCI-Express 4.0 x16 Adapter for Very Fast NVMe Storage

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
Intel Fanboys: 15.75 GB/s ought to be enough for anyone.

LOL, that's the theoretical maximum of an x16 PCIe 3.0 slot.

The Intel Core i7-9700K only has 16 PCIe 3.0 lanes.

Plop one of those babies in (assuming it would work in an Intel system, it won't) and you have just saturated the entire Intel PCIe bus. Doesn't leave much for your GPU.
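The lane math behind that 15.75 GB/s figure, as a quick back-of-the-envelope check (my own numbers, not from the article):

```python
# Per-direction PCIe 3.0 bandwidth: 8 GT/s per lane, 128b/130b line coding.
lanes = 16
gt_per_s = 8.0            # PCIe 3.0 transfer rate per lane
encoding = 128 / 130      # 128b/130b coding overhead
gb_per_s = lanes * gt_per_s * encoding / 8  # gigatransfers (bits) -> bytes

print(f"PCIe 3.0 x{lanes}: {gb_per_s:.2f} GB/s")  # PCIe 3.0 x16: 15.75 GB/s
```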
 

Makaveli

Splendid
The 9700K CPU itself provides 16 lanes.

The Intel chipset provides 24 more, but its DMI 3.0 connection to the CPU has only about 3.9 GB/s of bandwidth.

So yeah, I wouldn't be using this on an Intel system. AMD only.
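That 3.9 GB/s figure falls out of the same encoding math, since DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link. A quick sanity check (my arithmetic, not the poster's):

```python
# DMI 3.0 is effectively a PCIe 3.0 x4 uplink, so everything hanging
# off the chipset shares roughly this much bandwidth per direction:
dmi_lanes = 4
dmi_gb_per_s = dmi_lanes * 8.0 * (128 / 130) / 8  # 8 GT/s, 128b/130b coding

print(f"DMI 3.0 uplink: {dmi_gb_per_s:.2f} GB/s")  # DMI 3.0 uplink: 3.94 GB/s
```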
 

mac_angel

Distinguished
Mar 12, 2008
560
80
19,060
I'd think creating a PCIe Gen3 card to support Gen4 M.2 drives would make more sense: put a controller on the card so it can use eight lanes of Gen3 PCIe (or 16 lanes for a dual card).
 

popatim

Titan
Moderator
This adapter should work in a Gen3 slot, and Gen4 M.2 drives do work in a Gen3 board; they downgrade to the Gen3 protocol.

The problem is where you install this. Your GPU slot? Nobody has 32 lanes from the CPU in the consumer market...

You could use a chipset x16 slot, but then you are limited to the DMI interface for any data going to/through the CPU, and that is just a PCIe x4 connection, so all that M.2 performance goes right out the window... And even AMD's X570 boards don't typically offer one x16 slot with all 16 lanes there. If you drop it in an 8-lane slot, you lose 2 of the M.2 drives...

While benchmarks will be drool-worthy for sure, I don't see a real market for this outside of serious workstations.
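The "lose 2 of the M.2 drives" point follows from how passive quad-M.2 risers work: they rely on CPU lane bifurcation, with each drive hard-wired to its own x4 link. A toy sketch of that lane math (assuming a passive card and x4 per drive, which matches this adapter's design):

```python
def visible_drives(slot_lanes, drives=4, lanes_per_drive=4):
    """Passive quad-M.2 cards have no PCIe switch: the slot is split
    (bifurcated) into fixed x4 links, one per drive. A narrower slot
    simply leaves the remaining drives disconnected."""
    return min(drives, slot_lanes // lanes_per_drive)

print(visible_drives(16))  # 4 -> all four drives usable in a full x16 slot
print(visible_drives(8))   # 2 -> an x8 slot only reaches two of them
```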
 

PBme

Reputable
Dec 12, 2019
61
37
4,560
This adapter should work in a Gen3 slot, and Gen4 M.2 drives do work in a Gen3 board; they downgrade to the Gen3 protocol.

The problem is where you install this. Your GPU slot? Nobody has 32 lanes from the CPU in the consumer market...

You could use a chipset x16 slot, but then you are limited to the DMI interface for any data going to/through the CPU, and that is just a PCIe x4 connection, so all that M.2 performance goes right out the window... And even AMD's X570 boards don't typically offer one x16 slot with all 16 lanes there. If you drop it in an 8-lane slot, you lose 2 of the M.2 drives...

While benchmarks will be drool-worthy for sure, I don't see a real market for this outside of serious workstations.
It has to be aimed at the Threadripper/TR4 crowd, especially as that is the only way to run it at x16 (if that is really needed vs x8). But that makes sense, as the folks who might actually benefit from this, and have this level of money to spend on a PC, are more likely the ones buying the new Threadrippers.
 

InvalidError

Titan
Moderator
While benchmarks will be drool-worthy for sure, I don't see a real market for this outside of serious workstations.
If your workstation needs to read/write at 16+GB/s on a regular basis, you probably need more RAM to keep larger chunks of whatever you are working on in-memory or cached so you don't have to rely as heavily on IO.
 
LOL, that's the theoretical maximum of an x16 PCIe 3.0 slot.

The Intel Core i7-9700K only has 16 PCIe 3.0 lanes.

Plop one of those babies in (assuming it would work in an Intel system, it won't) and you have just saturated the entire Intel PCIe bus. Doesn't leave much for your GPU.
I was attempting to make a pun on a quote attributed to Bill Gates:

640K (memory) ought to be enough for anyone.


Obviously, 640K wasn't enough for everyone.

Intel's inability/reluctance to release a PCIe 4.0 chipset shows how far behind the curve they are.

Even the latest and greatest 10th-generation CPUs (i9-10980XE, ...) are still PCIe 3.0.
 

Ncogneto

Distinguished
Dec 31, 2007
2,355
53
19,870
This adapter should work in a Gen3 slot, and Gen4 M.2 drives do work in a Gen3 board; they downgrade to the Gen3 protocol.

The problem is where you install this. Your GPU slot? Nobody has 32 lanes from the CPU in the consumer market...

You could use a chipset x16 slot, but then you are limited to the DMI interface for any data going to/through the CPU, and that is just a PCIe x4 connection, so all that M.2 performance goes right out the window... And even AMD's X570 boards don't typically offer one x16 slot with all 16 lanes there. If you drop it in an 8-lane slot, you lose 2 of the M.2 drives...

While benchmarks will be drool-worthy for sure, I don't see a real market for this outside of serious workstations.


But X570 boards with an x8 PCIe 4.0 slot have the same bandwidth as Intel boards with an x16 PCIe 3.0 slot. So, in theory, your graphics card should be just fine in an X570 board's x8 slot.
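That equivalence checks out numerically: each PCIe generation doubles the per-lane transfer rate, so halving the lane count at the next generation is a wash. A quick sketch (my numbers, for illustration):

```python
def pcie_gb_per_s(gen, lanes):
    """Per-direction bandwidth for a PCIe 3.0 or 4.0 link."""
    rate = {3: 8.0, 4: 16.0}[gen]  # GT/s per lane; Gen4 doubles Gen3
    encoding = 128 / 130           # both generations use 128b/130b coding
    return lanes * rate * encoding / 8

print(pcie_gb_per_s(4, 8))   # ~15.75 GB/s
print(pcie_gb_per_s(3, 16))  # ~15.75 GB/s -- identical bandwidth
```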
 

PBme

Reputable
Dec 12, 2019
61
37
4,560
But X570 boards with an x8 PCIe 4.0 slot have the same bandwidth as Intel boards with an x16 PCIe 3.0 slot. So, in theory, your graphics card should be just fine in an X570 board's x8 slot.
If the graphics card is PCIe 3.0, not 4.0, then when it is dropped to x8 it behaves the same as x8 3.0, since it can't take advantage of the 2x speed of a 4.0 lane. But the reality is that only top-end graphics cards, in specific workloads/games, see any performance impact at x8 vs x16 currently, so it isn't something for most people to worry about yet. I'd assume Nvidia's cards this year will all be 4.0 enabled.

Only one board that I'm aware of has an x8/x8 arrangement, but it's $650 (MSI Godlike).

My ASRock Creator (which was $500, but that was because of the Thunderbolt 3 and 10G networking built in, not any special PCIe awesomeness) is x8/x8. There seem to be quite a few, if not most, X570 boards that run x8/x8 with cards in the primary slots. But some things, like populating a second or third M.2 on the motherboard, could drop that, depending on the board.
 

InvalidError

Titan
Moderator
But the reality is that only top-end graphics cards, in specific workloads/games, see any performance impact at x8 vs x16 currently
The 4GB RX 5500 begs to differ, with up to 100% performance improvement (double the frame rates) between 4.0 x8 and 3.0 x8. AMD screwed up real bad by making the RX 5500 x8-only when the majority of systems that might use it can only do so at 3.0 speeds; it really needed to support x16.