Asus X99-E-10G WS Motherboard With 10 Gbit Ethernet Controller Makes NA Debut

This seems less than useful for the vast majority of users. :) 10G networking? Why? I'm not saying nobody can use it, but I'd think such users are few and far between.
 
Interesting, given the high MSRP and the workstation focus, I'd have expected them to add Thunderbolt 3 over USB-C. I can see why they dropped SATA Express ports, since no one uses them. U.2 drives are very difficult to find; someone building their own workstation would more likely use easier-to-find M.2 and PCIe SSDs. The lanes for that U.2 port would be better spent on Thunderbolt 3 for use with 6+ bay RAID enclosures.
 


Until routers come with 10GbE ports and are affordable, it is not useful for 99% of users. Even most companies have yet to move to 10GbE networks, and those that have use it only for connecting SANs to thin servers.
 
I would only build a system with this board using ECC memory and a Xeon processor that supports it, but the total cost will likely skyrocket to over $2,000 with 32GB of DDR4 ECC memory.
 

You don't need a router with 10Gbps ports, only a switch with enough 10Gbps 'uplink' ports to accommodate the few computers and servers on your LAN that actually need 10Gbps. The rest of the LAN, including the router, can stay at 1Gbps.

I have a similar setup, but in reverse: my essential stuff (two main PCs and a VoIP adapter) is plugged into my router's 1GbE ports, and all of my unessential stuff is connected to an SRW224G4 switch (4x GbE + 24x 100M, bought some 10 years ago) with one of the GbE ports used for the router uplink.
 
lol yea, 10Gbps switches are still pricey. Newegg shows the cheapest at $300, and that switch is way overboard for consumers, with tons of ports. They need to release a consumer/prosumer-level switch or a router with 10Gbps ports. TBH, a router only needs 1 or 2 such ports for someone's media server (that would be enough for me). I doubt anything else on a prosumer's network is maxing out 1Gbps unless they are into heavy transcoding/video work. But I'd love to throw my FreeNAS server onto a 10Gbps link for multiple connections.
 


But again, your setup is that of someone who knows what they are doing and can benefit from it. I could benefit if I had a server with a 10GbE NIC, as it would allow multiple 1GbE connections at once instead of having to share bandwidth.

We are, however, the minority in that. The majority won't benefit until their ISP's router, or whatever router they buy, supports 10GbE, and even then I don't think normal consumers do much that will saturate a 1GbE link, let alone a 10GbE one.
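
For a rough sense of the aggregation argument, here's a back-of-envelope sketch (the ~94% usable-throughput figure is an assumption for TCP/IP and Ethernet framing overhead, not a measured number):

```python
# Back-of-envelope: how many full-rate 1GbE clients a single 10GbE
# server NIC could feed at once. Efficiency figure is an assumption.
LINK_GBPS = 10.0      # server NIC line rate
CLIENT_GBPS = 1.0     # each client's line rate
EFFICIENCY = 0.94     # rough TCP/IP + framing overhead

usable_gbps = LINK_GBPS * EFFICIENCY
clients = int(usable_gbps // CLIENT_GBPS)
print(f"~{usable_gbps:.1f} Gbps usable -> ~{clients} clients at full 1GbE")
# ~9.4 Gbps usable -> ~9 clients at full 1GbE
```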
 
Nice. Except for the price, that is. The 1 Gigabit version is about double what I paid for my Supermicro Socket-2011 workstation board, nearly 4 years ago. It still works perfectly.

My biggest issue with 10GbE switches isn't price (there are models cheaper than this board!), but rather noise. Since they're 1U and run hot, that means loud fans.
 
Not that I'm the target market here, because I'm poor as dirt, but I imagine this could be useful if you had a really nice NAS setup as well. Get anything loaded in a heartbeat.
 
The ASRock version of this board has been available for quite some time now; it is cheaper and well made. I'd say buying this over the ASRock version is a waste of money.
 


Actually, it's so far into overkill for a NAS it's not even funny. If you want to make a nice, fast, responsive NAS, you can build around a Pentium G3000-series chip and be done with it for a LOT less money and power usage. I'm running a G3220/8GB/B85 setup with a bunch of drives in RAID 5 and it's plenty fast (90+ MB/sec on a Gbit network).
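
For context, 90+ MB/sec is already close to the practical ceiling of Gigabit Ethernet, so a beefier board wouldn't show up in network transfers anyway. A quick sketch (the ~92% protocol-efficiency figure is an assumption):

```python
# Gigabit Ethernet ceiling vs. the 90+ MB/s observed from the NAS.
raw_mb_s = 1000 / 8            # 125 MB/s theoretical on GbE
usable_mb_s = raw_mb_s * 0.92  # assumed SMB/NFS + TCP overhead
print(f"GbE usable ceiling ~{usable_mb_s:.0f} MB/s vs. 90+ MB/s observed")
# GbE usable ceiling ~115 MB/s vs. 90+ MB/s observed
```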

 


And it also uses a Molex connector for extra power instead of a 6-pin PCIe connector like it should.

To each their own. I think the Asus looks better, and I like that it uses a PCIe connector for additional power instead of Molex, but both are good boards, and 10GbE is still useless for the majority of people.
 
This board obviously isn't for the majority of people.

The Phenom II-based fileserver I built in 2010, with a software RAID 6 (5x 1 TB 7200 RPM drives), could sustain read speeds of 350 MB/sec. If I were to update that with a faster CPU and probably 6 TB drives, well, you can just use your imagination.

Now, a more typical use case is someone reading data off a Windows share on an SSD-equipped PC. Even with just a SATA 3 SSD, it wouldn't be unreasonable to see a 5x speedup over Gigabit Ethernet. With NVMe, the 10 GigE link would be the bottleneck!
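
Rough math behind those claims (the drive speeds and the ~94% usable-throughput figure are ballpark assumptions):

```python
# Sanity check on the "5x speedup" and "10 GigE bottleneck" claims.
GBE_USABLE = 125 * 0.94      # ~117 MB/s usable on Gigabit Ethernet (assumed)
TENGBE_USABLE = 1250 * 0.94  # ~1175 MB/s usable on 10GbE (assumed)
SATA_SSD = 550               # MB/s, typical SATA 3 SSD sequential read
NVME_SSD = 3000              # MB/s, typical PCIe 3.0 x4 NVMe read

print(f"SATA SSD vs GbE: {SATA_SSD / GBE_USABLE:.1f}x potential speedup")
print(f"NVMe vs 10GbE: drive {NVME_SSD} MB/s > link {TENGBE_USABLE:.0f} MB/s,"
      " so the network is the bottleneck")
# SATA SSD vs GbE: 4.7x potential speedup
# NVMe vs 10GbE: drive 3000 MB/s > link 1175 MB/s, so the network is the bottleneck
```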
 
It would've been even cooler if they had instead fitted the mobo with a pair of QSFP28 cages; that way, one could fit the system with 100GbE super duper laser fiber optics that could be used for toasting bread when not in use.

I guess it could be useful to someone who wants to set up a relatively cheap server with high bandwidth. Otherwise, I'll leave the discussion on how useful it would be to the average Joe to someone else.
 
I think Intel would prefer people use Omni-Path when you get that high-end.

The other obstacle you face is all the PCIe 3.0 lanes it would take to keep those completely fed. Even using just 16 PCIe 3.0 lanes would be problematic, as there are certainly a lot more potential customers who'd run multi-GPU than 100 GbE.
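
The lane math bears that out (the per-lane usable bandwidth after 128b/130b encoding is a commonly quoted approximation):

```python
# Why even a full PCIe 3.0 x16 slot can't keep dual 100GbE ports fed.
PCIE3_LANE_GB_S = 0.985      # ~985 MB/s usable per lane (128b/130b encoding)
lanes = 16
slot_gbit_s = PCIE3_LANE_GB_S * lanes * 8   # per direction
dual_100gbe_gbit_s = 2 * 100                # line rate of two 100GbE ports
print(f"x16 slot: ~{slot_gbit_s:.0f} Gbit/s vs. {dual_100gbe_gbit_s} Gbit/s needed")
# x16 slot: ~126 Gbit/s vs. 200 Gbit/s needed
```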

Oh, and then there's the small fact that dual 100 Gigabit cards cost more than this board!
 