News: Leaked Intel Arrow Lake chipset diagram shows more PCIe lanes, no support for DDR4 — new chipset boasts two M.2 SSD ports connected directly to CPU


Deleted member 2731765

Guest
Here is the FULL I/O config for Arrow Lake S, HX, H and Lunar Lake platforms.

Arrow Lake-S desktop series.
Arrow Lake-HX high-end mobile lineup.
Arrow Lake-H mobile parts.
Lunar Lake mobile SoC.


I guess most of the info should be self-explanatory.

Though we can see that the Arrow Lake-S desktop CPUs will feature 16 PCIe Gen5 lanes dedicated to discrete graphics on the SoC tile, whereas the IOE tile features 4 Gen5 lanes and 4 Gen4 lanes dedicated to M.2 SSDs.


[Attached diagrams: Arrow Lake-S, Arrow Lake-HX, Arrow Lake-H, and Lunar Lake I/O configurations]
 

bit_user

Titan
Ambassador
Anyway, as we know, the upcoming 'Arrow Lake' and 'Lunar Lake' CPU lineups will be sporting the new "Lion Cove" P-core and "Skymont" E-core architectures, and a recent Intel presentation targeted at PC OEMs highlighted one important aspect of the "Skymont" E-cores.

The slides are too small and blurry, and the original media post also got deleted, but we have a few details.

"Skymont" E-core is said to offer a double-digit IPC gain over the "Crestmont" E-core powering the current "Meteor Lake" processor lineup. Not much surprising though.

This double-digit IPC gain over "Crestmont" is achieved through an improved branch prediction unit, a broader 9-wide decode unit compared to the 6-wide decode of "Crestmont," and eight integer ALUs, compared to four on its predecessor.

There is also a dependency optimization in the out-of-order engine, and deeper queuing across the engine, just to name a few optimizations.

Skymont E-core:

9-wide decode
8 integer ALUs
Double-digit IPC gain


[Attached slides: Skymont E-core presentation]
As always, thanks for sharing the wealth of info!

Back during the Computex deluge, Intel gave a big presentation about Lunar Lake, which covered the Skymont and Lion Cove changes in a fair bit of detail:
I recommend going through the slides yourself, because I caught a few key errors in the text, such as the claim of Skymont being faster than Raptor Cove (the slides clearly show it's not strictly faster, but does have better IPC).
 

bit_user

Titan
Ambassador
An Arrow Lake desktop CPU benchmark has been leaked which shows up to 20% faster single-thread uplift versus 14900K/KS.
Intel claims 14% better IPC for Lion Cove. So, if they also got a 5% clock speed bump, then sure.

...however, Intel 4 didn't clock terribly well (which is probably the reason Meteor Lake-S got cancelled). And the thermal density should be even higher on Intel 20A. So, I wouldn't assume a 5% clock speed increase is a given.
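
Rough arithmetic behind that 20% figure, as a quick sketch (the 14% IPC number is Intel's claim; the 5% clock bump is just an assumption, not confirmed):

# Sketch: how a ~20% single-thread uplift could decompose.
ipc_gain = 1.14      # Intel's claimed IPC improvement for Lion Cove
clock_gain = 1.05    # hypothetical 5% frequency bump (assumption)
st_uplift = ipc_gain * clock_gain - 1
print(f"Combined single-thread uplift: {st_uplift:.1%}")   # ~19.7%

So the leaked number is plausible, but only if the clocks actually move.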

My thoughts about Intel going so heavy on IPC is that they might see that pushing ever higher clockspeeds is going to become increasingly problematic, on ever smaller nodes. That's just a hunch, not based on any hard info.

It would be fairly consistent with Zen 5 not pushing higher clockspeeds, but Zen 5 is also moving just to TSMC N4. Where the speculation gets more interesting is that Zen 5C is moving to TSMC N3(B?). So, the key question is: did they keep regular Zen 5 at N4 for thermal density reasons, clockspeed/efficiency reasons, or fab capacity reasons?
 

usertests

Distinguished
Mar 8, 2013
It would be fairly consistent with Zen 5 not pushing higher clockspeeds, but Zen 5 is also moving just to TSMC N4. Where the speculation gets more interesting is that Zen 5C is moving to TSMC N3(B?). So, the key question is: did they keep regular Zen 5 at N4 for thermal density reasons, clockspeed/efficiency reasons, or fab capacity reasons?
N4/N4P looks like a fairly minor improvement over N5: not nothing, but not enough to drive clock speeds up massively. It looks like AMD has chosen to lean into the efficiency improvement, as seen by some TDPs dropping.

I think it was possible that Zen 5 (8-core chiplets) were supposed to be on N3 but got pulled back to N4 because of fab concerns. No proof, just rumor mill.
 

mac_angel

Distinguished
Mar 12, 2008
For short runs (under 60 ft) Cat5e is fine for 5Gb, and will even do 'OK' with 10Gb at half that, but usually we are talking about the last few dozen feet for these routers/switches. For 2.5Gb+, if you're feeding a centralized modem throughout an old house network, it'll struggle past 2.5Gb beyond 100 ft, so you might need to prioritize ports.

Right now, I think you have to work hard to find just Cat5e anymore; almost everything is Cat6 by default at the cheapest end and Cat7 in general, with Cat8 being the one where you notice the price jump. Are they true to spec? Meh, kinda like HDMI/USB/TB/DP compliance: usually good enough, sometimes not, but most will get it done.
You're right, if you're talking about Cat5e. But it is very easy to find Cat6-Cat8 that can handle much longer runs. I finished the basement of the townhouse we live in, and while I did that I ran shielded Cat8 throughout the house; it's supposed to be rated for 40 Gbps at 2,000 MHz. I can't say with 100% certainty that it works at that speed, but that's mostly because of the problem of finding routers, switches, motherboards, etc. that can do it. I'd love to be able to transfer a lot of my video files from my Plex server to my gaming computer with the 4090 to work on. Right now I have 3Gb Internet and an 8-port 2.5Gb switch.
A 10Gb part might cost a bit more at the start, but if all the companies implemented them (routers, switches, motherboards, laptops), that much mass production would bring the cost down easily. I honestly can't figure out why they won't, especially now that, one, storage has advanced in speed many times more than 1Gb. Even mechanical HDDs can go faster than that. A SATA SSD can easily saturate 2.5Gb, and M.2 drives can saturate 10Gb. And two, the technology is there. All other computer components have increased in speed many times over, including Wi-Fi, but they've left Ethernet alone, and it is still very widely used.
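
To put rough numbers on the storage-vs-link comparison, here's a quick sketch (ballpark drive throughputs, not measured benchmarks):

# Compare ballpark drive speeds (MB/s) against Ethernet link rates (Gbit/s).
links_gbps = {"1GbE": 1.0, "2.5GbE": 2.5, "10GbE": 10.0}
drives_mbps = {"mechanical HDD (sequential)": 250, "SATA SSD": 550, "NVMe M.2 (Gen4)": 7000}

for drive, mb_s in drives_mbps.items():
    gbit_s = mb_s * 8 / 1000                  # MB/s -> Gbit/s
    saturated = [name for name, rate in links_gbps.items() if gbit_s >= rate]
    print(f"{drive}: ~{gbit_s:.1f} Gbit/s, can saturate: {saturated}")

Even the mechanical drive outruns gigabit, the SATA SSD outruns 2.5Gb, and a modern NVMe drive outruns 10Gb.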
 
  • Like
Reactions: KnightShadey

bit_user

Titan
Ambassador
I think it was possible that Zen 5 (8-core chiplets) were supposed to be on N3 but got pulled back to N4 because of fab concerns. No proof, just rumor mill.
Would that really be true if Zen 5C went with N3B at roughly the same time? Granted, there's a lot we don't know, but it does make me wonder if N3B isn't such a great node for high-performance dies.
 
  • Like
Reactions: usertests

bit_user

Titan
Ambassador
Right now I have 3Gb Internet, and an 8 port 2.5Gb switch.
ServeTheHome has an Ethernet switch buyers guide with loads of cheap 4x 2.5 + 2x 10G switches, if you only need two ports at 10 GBase-T right now.


Also, check out the switch reviews section of their site, because they're continually reviewing new models.

A 10Gb part might cost a bit more at the start, but if all the companies implemented them (routers, switches, motherboards, laptops), that much mass production would bring the cost down easily. I honestly can't figure out why they won't,
Among other things, power consumption is much higher at 10 GBase-T than at lower speeds. That means you need bigger heatsinks, as well, because it needs to be designed to sustain those speeds. This also makes fanless 10 GBase-T switches nonexistent, as far as I know. A lot of home users are wary of actively-cooled switches, given how loud some of those 1U boxes can be.

All other computer components have increased in speed many times over, including Wi-Fi, but they've left Ethernet alone, and it is still very widely used.
Ethernet standards have been advancing, as have network product introductions. The only thing that hasn't changed in the past 5 years or so is the number of PC motherboards with built-in 10 GBase-T. Even 5 years ago, you could find high-end workstation and some server motherboards with it, but the situation hasn't changed much, in that time.
 
Last edited:
  • Like
Reactions: thestryker

KnightShadey

Reputable
Sep 16, 2020
On the 10Gbit topic...

When 10GBase-T was first defined, 10 Watts or more per port were needed to make it work, because pretty much like telephone modems they needed to exploit very complex "analog" modulations to push more data through a copper medium that simply wouldn't support higher frequencies.

10 Watts won't work with a notebook, and I remember the first time I powered on a 48-port 10GBase-T switch in the office... like a jet engine taking off from the desk.

Two things to consider for that: first, the laptops we're really talking about are higher end, where power budget is rarely an issue, even though they laughably do battery tests. Folks like MSI moved from a 300/350W to a 400W power brick and it was lauded, both for the GaN aspect and for the wattage allowing them to get max performance. So I don't think power budget is an issue for the few laptops that will even care about Ethernet above 1/2.5Gb, let alone keep an RJ45 port at all. If you listen to an ASUS ROG or Alienware power up, you will get a similar jet feeling. 😵‍💫

Speaking of alien, I think shielding can also help with alien crosstalk issues for mobile applications, and realistically for most laptop scenarios we are talking about very short cable throws, usually 8 ft +/- and perhaps a bit longer for folks in a non-standard location (I carry 15 ft and 25 ft of flat cable in my laptop backpack plus various patch cables), but I would say the latter length is rare for most people on the go, who would prefer to go wireless and don't need to plug into a rack. I think most people would be better served by better-quality cables for 2.5Gb than by trying to make higher speeds work on older cables, even if they are 'supposed' to be enough.

In the end we are still talking about fringe users, whether mobile or desktop, but... most are willing to do 'whatever it takes' and spend a few more pennies to make it work how they want. So I suspect 10Gb will be the last stop before they just say 'go TB/USB', whereas desktop is simple, with 10+Gb SFP add-in cards having been a thing for many years and SFP finally showing up on MoBos themselves.

What we need now is optical connectors for everybody!

Yeah, and we are talking about just a few dimes more per user. It's amazing Toslink became ubiquitous but optical networking still seems to be seen as pie in the sky, when it is kinda cheap for the hardware. Yes, it's more expensive/trickier for cabling, but as economies of scale grow and the SFP+ form factor becomes more common, so will cabling options; it's already become much cheaper commercially. There are already some high-end consumer and consumer-adjacent MoBos that have supported it since about 5 years ago (ASRock, SuperMicro, etc. IIRC, who also already had 10Gb RJ45), and SFP+/SFP28 gets you 1.25Gb-100Gb optical with 1-10Gb RJ45 options as well; you just need to pay for the add-in modules, which are like $25-40 for either at 10Gb.
Also, it would allow you to upgrade simply by swapping out the module. It's a simple fix for desktop, a bit harder on mobile. But like I said, I think ubiquitous 5Gb/10Gb RJ45 isn't far off, and likely SFP perhaps after that. The only thing that could derail that is future PCIe/TB/USB making it pointless to specifically support networking versus giving it another dedicated lane and hardware pathway, similar to SSDs, and just eschewing built-in connectors altogether.
 
  • Like
Reactions: usertests
I think it was possible that Zen 5 (8-core chiplets) were supposed to be on N3 but got pulled back to N4 because of fab concerns. No proof, just rumor mill.
Honestly I think it was really cost/benefit analysis at work. TSMC stumbled out of the gate, with the first N3 node being more complex than they had planned due to trying to scale down SRAM, so they fast-tracked the next version without the SRAM scaling. In turn, all public word has been that N3 is the biggest price jump TSMC has had for a leading-edge node.
Would that really be true if Zen 5C went with N3B at roughly the same time? Granted, there's a lot we don't know, but it does make me wonder if N3B isn't such a great node for high-performance dies.
I think we'll potentially find an answer to this if AMD confirms the highest-core-count Zen 5 servers still use N4/N5, since there's a 16-core CCD for regular Zen 5.
 
  • Like
Reactions: KnightShadey

KnightShadey

Reputable
Sep 16, 2020
You're right, if you're talking about Cat5e. But it is very easy to find Cat6-Cat8 that can handle much longer runs.

Oh yeah, I meant specifically that: it's far more work to find something that is ONLY Cat5e nowadays; by default pretty much everything is at least Cat6, and even that is changing, because the cost to manufacture Cat7 is inconsequential vs being stuck holding inventory of old cable. Cat8 is still a leap in quality, with a commensurate price increase, so I mentioned it as the 'you notice when buying' aspect vs the others.

We buy bulk remaindered cable off people to bury on the mountain for ski racing, because we don't even need Cat5e; old Cat5 is good enough for our analogue continuity circuits for timing eyes, even over multiple kilometres. The only challenges are cable breaks due to frost heave, extreme lows (-40 or below)... and BEARS (both black and grizzly), who love the taste of insulating gel & dielectric grease and digging & playing. A grizzly destroyed an 80-pair bundle once; it took us almost 2 full days to splice & test. So we replace long runs quite often, and unfortunately (and for good reason) we can't just 'lose' supplies from work, which have long since exceeded our requirements.

A 10Gb part might cost a bit more at the start, but if all the companies implemented them (routers, switches, motherboards, laptops), that much mass production would bring the cost down easily. I honestly can't figure out why they won't, especially now that, one, storage has advanced in speed many times more than 1Gb. Even mechanical HDDs can go faster than that. A SATA SSD can easily saturate 2.5Gb, and M.2 drives can saturate 10Gb. And two, the technology is there. All other computer components have increased in speed many times over, including Wi-Fi, but they've left Ethernet alone, and it is still very widely used.

Oh yeah, it's a fairly simple task. SFP+ is the easy existing solution, already available on desktops as add-in cards (with a much more flexible future, with copper and optical options), and it seems to be slowly showing up as an on-board option, even in SFF applications it seems (I just did a quick look after posting above and there are many more options in the consumer space than I thought, which have bled through from commercial applications, but still kinda fringe for the most part).
So I suspect that, given a couple of years, it may become more commonplace, unless... as I mentioned above, they just go the TB/USB route instead, but if so it should get a dedicated lane like SSDs. I think the end of the line for mobile is 10Gb before that split, and we might even stop short of that if TB5 makes a compelling enough alternative for even the mobile enthusiasts.
 
  • Like
Reactions: abufrejoval

abufrejoval

Reputable
Jun 19, 2020
584
424
5,260
ServeTheHome has an Ethernet switch buyers guide with loads of cheap 4x 2.5 + 2x 10G switches, if you only need two ports at 10 GBase-T right now.
There are quite a lot of these 2.5Gbit switches with a single 10Gbit uplink (SFP or 10GBase-T) right now, evidently all using the same Realtek switch chip, completely passive and cheap. I'm using several of these as clusters in the house, where there is more than a single device in a room.
Among other things, power consumption is much higher at 10 GBase-T than at lower speeds. That means you need bigger heatsinks, as well, because it needs to be designed to sustain those speeds. This also makes fanless 10 GBase-T switches nonexistent, as far as I know. A lot of home users are wary of actively-cooled switches, given how loud some of those 1U boxes can be.
Netgear has been selling the 8-port NBase-T XS508M switch, capable of 10GBase-T on all ports, which includes a fan that I'd label as "unnoticeable".

It also has one port that can use SFP, for uplink or downlink. Having these dual personality 10Gbit ports included for free makes for a lot of flexibility for larger networks and can even save a bit of power e.g. when you connect the 10Gbit link to the 2.5 switch via a very cheap (when short) direct connect SFP cable.

Mostly it opens those Netgears up to using e.g. a 25 or 40 Gbit SFP switch with quad break-out cables (which is slightly beyond my means and needs).

Originally I started with a 12-port BS-MP2012 NBase-T from Buffalo around 10 years ago, which was affordable enough, but quite noisy with small high-speed fans.

I quieted those down as per some STH hints with Noctua fans, which is a bit borderline. It has dual Aquantia 6x switch chips inside and only Aquantia based 10GBit NICs from the hosts, meaning Green Ethernet and lowest negotiated power all around and perhaps for that reason still working just fine after probably 10 years now. With all eight ports going at max 10 Watt 10Gbase-T power, those quiet Noctua fans might not have been enough to have it survive.

I find it much easier to recommend those Netgear XS508M 8-port switches, which are zero trouble and so low-noise at a budget I found acceptable that I've just added more of them, accepting that several 8-port switches aren't as great a network as one bigger switch (or some aggregate topology). But since I'm the only real power user on that network, it's fine for me.

As long as you can work with SFP direct connect exclusively (3 meters or less), prices have more or less equaled out on NICs and entry level switches, while power consumption is a bit less.

In my case that started too late and I wanted the flexibility of working with copper Ethernet cables exclusively.
Ethernet standards have been advancing, as have network product introductions. The only thing that hasn't changed in the past 5 years or so is the number of PC motherboards with built-in 10 GBase-T. Even 5 years ago, you could find high-end workstation and some server motherboards with it, but the situation hasn't changed much, in that time.
Aquantia set out to change the 1-10GBase-T market completely with unusually low-power PHYs and the NBase-T flexibility. They didn't make friends with the old 10Gbit crowd that had hoped to make tons of money via faster networking and smarter VM- and storage-enabled unified smart NICs.

The latter resulted in a bloodbath, because nobody wanted to risk Ethernet and Fibre Channel on a single wire, and VMware got sick and tired of driver bugs, opting for a software-only approach for the entire 10Gbit generation. A lot of vendors merged, consolidated or disappeared over the following years. While Aquantia only got acquired late, it never experienced the growth their uniquely cheap and energy-efficient offering should have practically guaranteed.

I've heard tons of harsh comments about the quality of their hardware and drivers in FreeNAS communities, but while their stuff is zero "smart"-NIC, I've had zero trouble with them and they are very well supported on both Linux and Windows, much better than Intel these days.

I can only assume that Aquantia's 'rightful' expansion in the multi-Gbit Ethernet market suffered from a lot of Intel headwind; Intel for the longest time didn't want intermediate speeds diluting their 10Gbit margins and only conceded very late, and badly (in terms of quality), entering the 2.5Gbit market, but not with Aquantia NICs.

One of the bigger issues has been PCIe bandwidth needs, where a single lane of PCIe v2 was actually a good match for 2.5Gbit Ethernet. Sacrificing a full x4 slot to a faster network was never an easy choice, because there aren't a lot of those on desktop mainboards, and usually I reserved them for RAID controllers. With NVMe eliminating those it's an easier choice, but at PCIe v4 a single lane should be quite enough for 10Gbit. And Aquantia has even had a chip ready for that for years.
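
To make the lane math explicit, a small sketch (raw line rates adjusted only for encoding overhead, ignoring packet/protocol overhead):

# Approximate usable per-lane PCIe bandwidth vs. what a NIC needs.
pcie_per_lane_gbps = {
    "PCIe 2.0": 5.0 * 8 / 10,       # 8b/10b encoding   -> ~4.0 Gbit/s per lane
    "PCIe 3.0": 8.0 * 128 / 130,    # 128b/130b         -> ~7.9 Gbit/s per lane
    "PCIe 4.0": 16.0 * 128 / 130,   # 128b/130b         -> ~15.8 Gbit/s per lane
}
for nic_name, nic_gbps in (("2.5GbE", 2.5), ("10GbE", 10.0)):
    for gen, lane_gbps in pcie_per_lane_gbps.items():
        lanes_needed = int(-(-nic_gbps // lane_gbps))   # ceiling division
        print(f"{nic_name} on {gen}: about {lanes_needed} lane(s)")

Which is the point: a 2.5Gbit NIC fits in a single v2 lane, 10Gbit needs two v3 lanes, and a single v4 lane covers it.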

But again, for reasons I'd really like to become more publicly known, NICs with these chips are extremely hard to buy even years later. And then, for some stupid reason I can find no kind words for, all of them come on two-lane PCIe cards that won't fit into x1 slots, which again aren't open at the rear for some $%# reason. Yes, at PCIe v3 you need two lanes, but why not just make that a different SKU? And I've never seen an x2 slot; it's either x1 or x4.

Sacrificing a PCIe v5 x4 slot for a NIC that requires a single v4 lane to function is so painful, I'd rather pay extra to waste a 20Gbit USB port I can't do anything meaningful with instead.

With the acquisition by Marvell the Aquantia portfolio seems almost undiscoverable. I have no idea if they fired the design team and let the chips run until the market dries up, but they certainly don't seem eager to let the consumer market know there is an economic, working and very compatible product out there.

Perhaps they are just waiting for Intel to grab that market as soon as they try, but at least RealTek is now doing 5Gbit.
 
  • Like
Reactions: KnightShadey

bit_user

Titan
Ambassador
Quite frankly, 10Gbit/s would have been great to have 10 years ago. Today with NVMe storage rarely slower than 50Gbit/s, that's still a bottleneck even on a laptop.
I don't understand the logic behind this statement. It's like you're saying that because a file-copy can't saturate the laptop's SSD write speed, you'd rather limp along with 1 Gigabit (or whatever you can squeeze over wifi) than accept the order of magnitude improvement on offer, in the form of proper 10GBase-T.

Also, forgive me if I don't totally buy the laptop "power" argument. Most times, when someone can jack into hard-wired Ethernet, they can also plug into AC power. 20 years ago, I had a laptop with built-in 10/100 Ethernet and therefore needed to rely on a PCMCIA card adapter for Gigabit (except it didn't run all that close to a full 1 Gbps). Let me tell you, that card got hot!
 

Bluoper

Prominent
Sep 5, 2023
There isn't an ISP that supplies above-1Gb home internet within a 100-mile radius of my house. 10Gb on the motherboard isn't something 95% of general users would benefit from; reduced cost would be more impactful for them.
 

bit_user

Titan
Ambassador
Also, we still have no official word on whether HTT is really gone on ALL models; they might have some with and some without.
Intel's own presentation material on Lion Cove said all the supporting hardware was removed from the client version of the core, in order to achieve the greatest density and performance benefits.

[Attached Intel slide on removing Hyper-Threading from the client core]

They said it will still be available in the server version of the P-cores.
 

bit_user

Titan
Ambassador
There isn't an isp that would supply above 1Gb home internet in a 100 mile radius of my house. 10Gb on the motherboard isn't something %95 of general users would get benefit from, reduced cost would be more impactful for them.
It seems like in virtually every discussion about 10 GigE, someone inevitably makes this point. For like the billionth time: 10 GBase-T is about LAN, not internet! If you don't copy big files between your PCs at home, just forget about it and ignore the conversation!

The same was true of Gigabit, when it first rolled out. Virtually nobody had home internet at 1 Gbps, yet it didn't stop Gigabit from dominating the mainstream. I bought a mainstream Pentium 4 motherboard with builtin GigE, back in 2004. At the time, my home internet was probably about 15 Mbps, and yet the cost of that motherboard wasn't exorbitant and 5-port GigE switches could be purchased for < $100.
 
Last edited:
  • Like
Reactions: usertests

bit_user

Titan
Ambassador
It's amazing Toslink became ubiquitous but optical networking still seems to be seen as pie in the sky, when it is kinda cheap for the hardware. Yes, it's more expensive/trickier for cabling, but as economies of scale grow and the SFP+ form factor becomes more common, so will cabling options; it's already become much cheaper commercially.
Toslink cables are mostly plastic and super low-bandwidth. Optical network cables must use glass fibers and that's not very consumer-friendly.

But yeah, I don't really know the thinking behind Toslink. AFAIK, it was contemporaneous with S/PDIF, as the signal structure is even the same. Maybe it was like a 1980s-era "lasers -> cool" thing. Even the electrical isolation argument doesn't have much play, since S/PDIF inputs typically use isolation transformers to avoid ground loops.
 

bit_user

Titan
Ambassador
With the acquisition by Marvell the Aquantia portfolio seems almost undiscoverable. I have no idea if they fired the design team and let the chips run until the market dries up, but they certainly don't seem eager to let the consumer market know there is an economic, working and very compatible product out there.
STH recently reviewed a couple cheap AQC113C-based cards, but they're the PCIe x4 variety you hate.

 

KnightShadey

Reputable
Sep 16, 2020
Toslink cables are mostly plastic and super low-bandwidth. Optical network cables must use glass fibers and that's not very consumer-friendly.

Yes, I know; I've been working with it for decades (even as an installer in the late '90s/early 2000s before moving to mobility), and I mentioned that very challenge previously in this thread.

I was focusing on the adoption attitude, where Toslink vs coax digital was "OK, no problem" (and those cables were still more fragile than RCA cable), versus current optical still being seen as fringe/magic, whereas in reality, once installed in homes, those challenges are done with. Many new builds are extending fibre-to-the-curb to fibre-to-the-jack, with some municipalities making it a requirement for new builds.

For portable computing, sure; that's why I said most people are usually dealing with 8 ft +/- (occasionally more) of truly moving cables, where even the crosstalk concern at 10Gb is minimal, so it's a valid concern but a limited one.

However, at the platform level, BOTH copper and optical can be dealt with by SFP now, for those willing to pay for the gear; again, this isn't Joe Blow and ma & pa, it's enthusiasts who know and can deal with most challenges if the benefits are worth the effort/cost.
Even then, I see the utility to the consumer market as slim and short-lived, as USB/TB makes it less important. I just hope it is properly implemented instead of being just an afterthought of "capability in addition to everything else". Sure, I don't want a dongle on my/our laptops now, but that's likely the required future past possibly even 5Gb.
 
SFP is the solution to 10Gb and I'm glad me being cheap and buying SFP cards around a decade ago caused me to learn more about it. There's really no reason to not use it for the vast majority of installations most consumers use. 2.5Gb is plenty for wireless backhaul and that can be connected to a 2.5Gb + 10Gb SFP switch without issue.
With NVMe eliminating those it's an easier choice, but at PCIe v4 a single lane should be quite enough for 10Gbit. And Aquantia even has had a chip ready for that for years.
There are a ton of NICs available based on AQC113, but I've seen none in x1 format. Of course the reason why is that no motherboard has PCIe 4.0 x1 slots and it's plausible that implementation costs more than reusing PCBs.
 

bit_user

Titan
Ambassador
SFP is the solution to 10Gb and I'm glad me being cheap and buying SFP cards around a decade ago caused me to learn more about it.
One thing a lot of people might not know is that the SFP+ copper cables are coded to work only with certain controllers. I'm not sure if there's now a de facto standard, but it used to be a thing that if you got the wrong cable or couldn't find a cable that was mutually compatible with the devices at both ends, you were dead in the water.

There's really no reason to not use it for the vast majority of installations most consumers use.
Cost is still higher. For cheap, mass market consumer stuff, 2.5 Gbps makes sense to me as the new default.
 
  • Like
Reactions: thestryker
One thing a lot of people might not know is that the SFP+ copper cables are coded to work only with certain controllers. I'm not sure if there's now a de facto standard, but it used to be a thing that if you got the wrong cable or couldn't find a cable that was mutually compatible with the devices at both ends, you were dead in the water.
Yeah, it was definitely that way when I bought my 520 cards, but when I got a 710 I needed a longer cable, and all of the ones I found listed compatibility with every major manufacturer. I'm guessing the proprietary coding was never going to stand after the mess it made for early 10Gb adoption. While I'd much prefer a singular standard, being forced into a de facto standard is good enough.
Cost is still higher. For cheap, mass market consumer stuff, 2.5 Gbps makes sense to me as the new default.
Oh yeah, I agree 10Gb for all absolutely isn't necessary, but if you're someone who can use it, going SFP isn't a problem.
 
  • Like
Reactions: KnightShadey

Bluoper

Prominent
Sep 5, 2023
41
44
560
It seems like in virtually every discussion about 10 GigE, someone inevitably makes this point. For like the billionth time: 10 GBase-T is about LAN, not internet! If you don't copy big files between your PCs at home, just forget about it and ignore the conversation!

The same was true of Gigabit, when it first rolled out. Virtually nobody had home internet at 1 Gbps, yet it didn't stop Gigabit from dominating the mainstream. I bought a mainstream Pentium 4 motherboard with builtin GigE, back in 2004. At the time, my home internet was probably about 15 Mbps, and yet the cost of that motherboard wasn't exorbitant and 5-port GigE switches could be purchased for < $100.
You're talking about a very specialized use case. The discussion beforehand was about 10Gb being on general-use consumer motherboards. The majority of standard consumers don't have a home server/NAS, so 10Gb being on the average motherboard would be pointless to most people.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
You're talking about a very specialized use case. The discussion beforehand was about 10Gb being on general-use consumer motherboards.
IMO, it's not that specialized, but also not typical.

The majority of standard consumers don't have a home server/NAS, so 10Gb being on the average motherboard would be pointless to most people.
As I said above, I think 2.5 GbE should be the new standard. I'll go further and suggest that maybe we're about to see 5 GbE becoming more common on enthusiast boards, given this news:

 
  • Like
Reactions: Bluoper

mac_angel

Distinguished
Mar 12, 2008
ServeTheHome has an Ethernet switch buyers guide with loads of cheap 4x 2.5 + 2x 10G switches, if you only need two ports at 10 GBase-T right now.

Also, check out the switch reviews section of their site, because they're continually reviewing new models.


Among other things, power consumption is much higher at 10 GBase-T than at lower speeds. That means you need bigger heatsinks, as well, because it needs to be designed to sustain those speeds. This also makes fanless 10 GBase-T switches nonexistent, as far as I know. A lot of home users are wary of actively-cooled switches, given how loud some of those 1U boxes can be.

https://www.amazon.ca/dp/B09M7KSZB2...=1CPX5VNFAOQQZ&ref_=list_c_wl_lv_ov_lig_dp_it

My next wish-list item. Picking up a switch with only two 10Gb ports isn't worth it when I already have an 8-port 2.5Gb switch.
Ethernet standards have been advancing, as have network product introductions. The only thing that hasn't changed in the past 5 years or so is the number of PC motherboards with built-in 10 GBase-T. Even 5 years ago, you could find high-end workstation and some server motherboards with it, but the situation hasn't changed much, in that time.
I know Ethernet has advanced. Not nearly as much as other computer components, but it has been advancing. My complaint is that they haven't made it mainstream. It wasn't until 2016 that 2.5GbE came out, and that was only seen in high-end servers, etc. Yet a regular SATA SSD could write faster than that, meaning that even if you had that Ethernet, it would still be your bottleneck. 5GbE was released at the same time, but it was even more rare: very rare to see on a mainstream motherboard (even Z-series), though you'd see it on HEDT models (I have an X299 motherboard with one). And, for the most part, these would still be useless for small businesses or home users because you'd still need the backbone (switch, other computers, etc.) to be able to use it. $500+ routers might have 1 or 2 ports that will do it, but that still ends up being useless without being able to connect with other devices.
It should have been like every other standard: PCIe upgrading over the years, with every motherboard and the components that plug into it following suit. Same with RAM: DDR2, 3, 4, 5, etc.
 

bit_user

Titan
Ambassador
It should have been like every other standard: PCIe upgrading over the years, with every motherboard and the components that plug into it following suit. Same with RAM: DDR2, 3, 4, 5, etc.
All of your examples were only doing about 2x per generation. Ethernet had the (perhaps unfortunate) precedent of increasing 10x per generation. The gap between gigabit and 10gig just proved too costly; it outpaced complementary technologies like storage, and there wasn't a consumer need.

Also, I just need to set the record straight: PCIe did not progress at such an even pace. The jump from 3.0 to 4.0 took 7 years:

[Attached chart: PCIe specification release timeline]

If you look at how long it took PCIe to increase 10x, it took about 15 years!! I'll bet DRAM probably works out to something similar.
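
For reference, a rough sketch of that 10x timeline (approximate spec release years, not official dates):

# Approximate PCIe spec years and per-lane signaling rates (GT/s).
pcie_history = {"1.0": (2003, 2.5), "2.0": (2007, 5.0), "3.0": (2010, 8.0),
                "4.0": (2017, 16.0), "5.0": (2019, 32.0)}
base_year, base_rate = pcie_history["1.0"]
for gen, (year, rate) in pcie_history.items():
    print(f"PCIe {gen} ({year}): {rate / base_rate:.1f}x PCIe 1.0, {year - base_year} years in")

The first step past 10x only lands with PCIe 5.0 (12.8x the 1.0 rate), roughly 16 years after 1.0.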