News: Leaked Intel Arrow Lake chipset diagram shows more PCIe lanes, no support for DDR4 — new chipset boasts two M.2 SSD ports connected directly to CPU


mac_angel

Distinguished
Mar 12, 2008
All of your examples were only doing about 2x per generation. Ethernet had the (perhaps unfortunate) precedent of increasing 10x per generation. The gap between gigabit and 10gig just proved too costly, it outpaced complementary technologies like storage, and there wasn't a consumer need.

Also, I just need to set the record straight: PCIe did not progress at such an even pace. The jump from 3.0 to 4.0 took 7 years:
[chart: PCIe generation release timeline]
If you look at how long it took PCIe to increase 10x, it took about 15 years!! I'll bet DRAM probably works out to something similar.
Yes and no. The first Gigabit Ethernet came out in 1998, and it was quickly accepted in all markets, including consumer. Yes, that consumer base was a smaller percentage than it is today, but it was still readily available.
The next "upgrade" was 2.5Gb and 5Gb, released in 2016, 18 years later. The most PCIe had was 7. And PCIe was again, readily available to all markets. 2.5Gb and 5Gb were niche products and virtually impossible for a consumer to be able to utilize it (computers, NAS, Router/switches). At least not without spending a small fortune.
10GBase-T was developed and released much sooner: developed in 2001, released formally in 2002. So, yes, Ethernet did have a bigger jump of 10x vs. 2x, but there have been stopgaps in the following years, and it has taken 22 years to make it mainstream, which it still isn't. Even USB-C has massively outpaced it.
In 2010, 40Gb was approved as a new standard, and I can well imagine it was being used in industry/offices. Now I think those are rare, as many have gone to fiber. And fiber won't be an option for consumers for a long time, not until all IoT devices start dual-porting with Ethernet and fiber so people can make the transition.
The whole "ethernet" thing has been a big pet peeve of mine for about a decade now, and it just gets worse every year. I don't believe there is any legitimate reason now to be making 10Gb standard by now. It's 22 years old. Yea, it might require a bit more power, but if they had have invested even the slightest research into that as they have with everything else, they could have overcome that. And, yes, there are dual port 10Gb ethernet cards that are fanless, as well as switches (the one I linked that I'm still hoping to get one day).
 

bit_user

Titan
Ambassador
Yes and no. The first Gigabit Ethernet came out in 1998, and it was quickly accepted in all markets, including consumer. Yes, that consumer base was a smaller percentage than it is today, but it was still readily available.
You need to compare like-for-like, because if you're talking about 10GBase-T, that has also been available on high-end boards for about a decade.

If we're talking about mass-market, then Gigabit Ethernet didn't truly become a de facto standard feature of motherboards until the late 2000's.

The next "upgrade" was 2.5Gb and 5Gb, released in 2016, 18 years later.
You're getting the chronology all twisted up. 2.5GBase-T and 5GBase-T only happened to try and clear the logjam around 10GBase-T adoption.

The longest gap PCIe had was 7 years.
For a mere doubling, though. Not a 10x.

And PCIe was, again, readily available to all markets.
Mainstream desktop didn't have PCIe 3.0 until Ivy Bridge, in 2012.

2.5Gb and 5Gb were niche products, and it was virtually impossible for a consumer to actually use them (computers, NAS, routers/switches), at least not without spending a small fortune.
That's not really true, but there was a sort of chicken-and-egg problem of cheap switches not offering 2.5 GBase-T until more PCs had it, and vice versa. We already started to see some enthusiast motherboards integrating 2.5 GBase-T in the late 2010's, but then the pandemic supply chain issues threw a wrench in things and some of the parts in shortest supply ended up being specifically Ethernet ICs.

Back in Feb. 2020, I bought a Netgear managed switch with 4x 1Gb, 2x 2.5 Gb, 2x 5 Gb, and 2x 10 Gb ports for $210. That proves affordable, higher-end networking products were finally starting to reach the SoHo market, before the pandemic.

10GBase-T was developed and released much sooner. Developed in 2001, released formally in 2002.
Nope. That's the danger of trying to argue about something you don't really know. You're talking about 10 Gigabit Ethernet, but that didn't include twisted-pair copper, which is what "10GBase-T" specifically refers to. That wasn't standardized until 2006.

In 2010, 40Gb was approved as a new standard,
Not over twisted-pair copper cables. That only happened in 2016.

The whole "ethernet" thing has been a big pet peeve of mine for about a decade now, and it just gets worse every year.
Then you should've been following it more closely and should know more about it, by now.

I don't believe there is any legitimate reason 10Gb shouldn't be standard by now.
Are you an electronics engineer? Didn't think so. You're just acting as if technology trends are some kind of natural law, but they aren't.
 

abufrejoval

Reputable
Jun 19, 2020
There are a ton of NICs available based on AQC113, but I've seen none in x1 format. Of course the reason why is that no motherboard has PCIe 4.0 x1 slots and it's plausible that implementation costs more than reusing PCBs.
A 'ton' seems exaggerated when I can still only find two, one from OWC and another from DeLOCK.

Yeah, x1 slots are either becoming more rare or hidden in places where you can't use them (e.g. right underneath any GPU that isn't single-slot). And I've seen far too many x1-x4 slots without an open rear end, which I consider a crime!

And we've got Nvidia with their super-wide 4++ slot GPUs to thank for it, with M.2 being the only escape to use those lanes.

Still, I have several X570 boards with a single PCIe v4 x1 slot, while I've never seen one with PCIe x2.
And I honestly wouldn't even mind putting an AQC113 into a PCIe v3 x1 slot and running it at 5GBase-T: anything better than 1 or 2.5 Gbit is a big bonus!
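A rough sanity check of what a single lane can actually feed (the payload-efficiency factor below is just an assumed ballpark for TLP/protocol overhead, not a measured figure):

```python
# Rough check: usable bandwidth of a single PCIe lane vs. multi-gig Ethernet.
# Encoding is per spec (128b/130b for Gen 3/4); the payload-efficiency
# factor is an assumed ballpark for packet/header overhead.

PAYLOAD_EFFICIENCY = 0.85  # assumption, varies with TLP payload size

lanes = {  # slot: (GT/s, encoding efficiency)
    "PCIe 3.0 x1": (8.0, 128 / 130),
    "PCIe 4.0 x1": (16.0, 128 / 130),
}

for slot, (gts, enc) in lanes.items():
    usable = gts * enc * PAYLOAD_EFFICIENCY  # Gbit/s, very approximate
    print(f"{slot}: ~{usable:.1f} Gbit/s usable")

# ~6.7 Gbit/s on PCIe 3.0 x1: plenty for 5GBase-T, short of full 10GBase-T.
# ~13.4 Gbit/s on PCIe 4.0 x1: enough to run a 10GBase-T NIC at line rate.
```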

The cost for an extra PCB variant is ridiculously low, at least once you've got some scale.

The cost for a separate SKU as such is creeping up to ridiculous numbers, though perhaps not quite to the €150 they are asking for these, vs. the €100 for the older AQC107.

On a motherboard that AQC113 is more likely below €20 *and* doesn't waste the kind of real-estate an x1 slot does, but vendors will only put them on boards in the €800 range...

At least these days you get M.2 NICs with either the AQC107 or the AQC113 on them, or you can use an M.2 to PCIe x4 cable to stop yourself from pulling your hair out!
 

bit_user

Titan
Ambassador
The cost for a separate SKU as such is creeping up to ridiculous numbers, but perhaps not quite near the €150 they are asking for these, vs the €100 for the older AQC107.
Uh, I think you went from talking about motherboards to NICs, in this part. In reference to the AQC113 boards I linked, they're currently selling for $72 - $80, on Amazon. Probably available a bit cheaper on AliExpress.

Funny enough, I got two AQC107 boards, on Amazon, back in 2018 for a Black Friday sale price of $68 each. That was before Marvell acquired Aquantia. At the time, I think the non-sale price was in the high $90ish range.
 

abufrejoval

Reputable
Jun 19, 2020
I forgot to mention another point: every NIC is going to waste at least one PCIe lane, since there is no way to partition lanes between devices (yes, not counting switching).

Using one PCI slot or one PCIe v1 lane for a Gbit Ethernet NIC seemed straightforward a decade or two ago.

Conversely, keeping a NIC at 1Gbit (or 2.5) when PCIe lanes now run at 5-20 Gbit/s is something I consider a crime, because there simply aren't enough lanes around to waste them this badly.
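To put rough numbers on that waste (raw line rates only, overhead on both sides ignored, so treat these as ballpark figures):

```python
# Ballpark: how much of a single PCIe lane a NIC of a given speed can use.
# Raw line rates only; protocol overhead on both sides is ignored.

lane_gbps = {"PCIe 3.0": 8.0 * 128 / 130, "PCIe 4.0": 16.0 * 128 / 130}
nic_gbps = [1.0, 2.5, 5.0, 10.0]

for gen, lane in lane_gbps.items():
    for nic in nic_gbps:
        used = min(nic / lane, 1.0) * 100
        print(f"{nic:>4} GbE on one {gen} lane: ~{used:.0f}% used, "
              f"~{100 - used:.0f}% idle")

# A 1 GbE NIC leaves roughly 87% of a PCIe 3.0 lane (and ~94% of a 4.0 lane)
# idle, which is exactly the kind of waste described above.
```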

This is so obvious that I cannot help thinking that a lot of non-technical issues are at work behind the curtain.

Intel, Realtek, Marvell: please stop the games and let us have something remotely sane on the mainboard, because we really need to move to something that matches the bandwidths of the NVMe age.
 

abufrejoval

Reputable
Jun 19, 2020
Uh, I think you went from talking about motherboards to NICs, in this part. In reference to the AQC113 boards I linked, they're currently selling for $72 - $80, on Amazon. Probably available a bit cheaper on AliExpress.
I started with PCs when only CPUs and RAM were on the mainboard, and sometimes only the first part of the RAM...
To me they are completely interchangeable, which is no longer true, evidently...

Of course I'd prefer to have 10Gbit NICs onboard, because they should be a lot cheaper, take less space and now make do with a single lane.
Funny enough, I got two AQC107 boards, on Amazon, back in 2018 for a Black Friday sale price of $68 each. That was before Marvell acquired Aquantia. At the time, I think the non-sale price was in the high $90ish range.
10Gbit port prices on switches used to be $1000 with early NICs about the same.

Over almost 10 years, with the help of Aquantia switch chips, port prices dropped to around $50 on unmanaged entry-level switches, with NICs slightly more expensive, basically due to the SKU overhead. And perhaps that's the reason they are creeping up, even if the AQC107 can still be had for around €99 with VAT in Europe.

I've heard that Realtek Gbit chips are less than a dollar for mainboard vendors. The Aquantia may be a little more expensive, but I would still guesstimate it at below $20: probably less if it sold like hotcakes.

I don't know if it's a chicken & egg issue, too niche or some kind of finger wrestling: there is so much politics in everything and in IT, it's very hard to tell.

I just wish I could just pay the extra $20 and be done with it!

Just to think how much effort goes into RGB LEDs instead!!!
 

bit_user

Titan
Ambassador
I've heard that Realtek Gbit chips are less than a dollar for mainboard vendors. The Aquantia may be a little more expensive, but I would still guesstimate it at below $20: probably less if it sold like hotcakes.

I don't know if it's a chicken & egg issue, too niche or some kind of finger wrestling: there is so much politics in everything and in IT, it's very hard to tell.

I just wish I could just pay the extra $20 and be done with it!
What's the competitive landscape like? The chips for 10GBase-T are made by Broadcom, Intel, Marvell (Aquantia), and Realtek. Right? Did I miss any? In server adapters, it seems mostly a two-horse race between Intel and Broadcom, I think.
 
A 'ton' seems exaggerated when I can still only find two, one from OWC and another from DeLOCK.
They're sold in the classic random-name-imported-from-China form. It's a somewhat similar situation to the current Intel 2.5Gb controller: you'll find them on motherboards, in a small number of expensive NICs, and in a bunch of imported no-name cards.
Still, I have several X570 boards with a single PCIe v4 x1 slot, while I've never seen one with PCIe x2.
I never realized that a lot of AMD boards had PCIe 4.0 x1 slots. I must have assumed that because Intel boards don't tend to (I've only seen them on boards with a lot of slots), they wouldn't either. (This might change in the future, if the ARL chipset leaks are accurate, as they list no PCIe 3.0.)
And I've seen far too many x1-x4 slots without an open rear end, which I consider a crime!
Yeah, I definitely agree that anything other than an x16 slot shouldn't be closed.
The cost for an extra PCB variant is ridiculously low, at least once you've got some scale.
That's just it, though: there is no real volume for 10GbE cards. The majority being sold in NA on the client market are almost guaranteed to be these no-name Chinese imports, and in their case they do whatever maximizes margins, which means the fewest PCBs. Also, when you consider that Intel boards don't do PCIe 4.0 x1, aiming at the largest potential market also plays a part.
On a motherboard that AQC113 is more likely below €20 *and* doesn't waste the kind of real-estate an x1 slot does, but vendors will only put them on boards in the €800 range...
The AQC107 was definitely around $40-45, which is basically chipset pricing. I doubt Marvell is interested in taking a hit on margins, so I doubt the AQC113 is a lot cheaper. I'd bet we're looking at right around $30 or so, which in motherboard-vendor terms probably means $50 to the customer. That's too large an amount to eat the cost of, and on sub-$400 boards it's too much to add.

I definitely think that once we're above $400, the point has been reached where they really all should have 10GbE.
What's the competitive landscape like? The chips for 10GBase-T are made by Broadcom, Intel, Marvell (Aquantia), and Realtek. Right? Did I miss any? In server adapters, it seems mostly a two-horse race between Intel and Broadcom, I think.
Intel's last 10Gb controller was released around 5 years ago, and it's definitely enterprise-oriented and only available in dual/quad-port versions, so its power consumption isn't really appropriate for the consumer market. It's basically just Broadcom and Marvell that would be appropriate for consumer devices, and I'd imagine Broadcom charges as much as they possibly can, since that seems to be their corporate mantra.
 
Last edited:

bit_user

Titan
Ambassador
Yeah, I definitely agree that anything other than an x16 slot shouldn't be closed.
You guys seem to forget that there are other components on a motherboard. Sometimes, there's stuff that would interfere with putting a longer card in that slot. I think this is why some boards have slots that are mechanically like x8, but electrically just x4 (or even less).

Another potential reason not to put a big card in a small slot would be the weight of the darn thing, if it were something like a graphics card. A longer slot provides additional mechanical support. That might not be a big deal when your machine is just sitting there, but could make all the difference when a machine is shipped.
 
  • Like
Reactions: thestryker
You guys seem to forget that there are other components on a motherboard. Sometimes, there's stuff that would interfere with putting a longer card in that slot. I think this is why some boards have slots that are mechanically like x8, but electrically just x4 (or even less).
On a consumer board the only thing likely to be in the way is M.2 which can be designed around (I've never seen anything else in the way, but that certainly doesn't mean there isn't).

I think the decrease in PCIe slots in general is probably why Asus and Gigabyte have largely shifted to x16 mechanical slots on boards above budget range. Less RMA potential if you have an x16 slot than an open ended x4.
 
  • Like
Reactions: bit_user

abufrejoval

Reputable
Jun 19, 2020
You guys seem to forget that there are other components on a motherboard. Sometimes, there's stuff that would interfere with putting a longer card in that slot. I think this is why some boards have slots that are mechanically like x8, but electrically just x4 (or even less).
Forget, no. Acknowledge that it is an effort, yes. But if they want my money, I'd like some effort in return.

Once they break up a bundle of 4 PCIe lanes e.g. for onboard Ethernet, USB or perhaps a dual lane SATA controller, there is an opportunity and quite often these lanes are simply no longer used.

And I guess it's mostly a size matter with smaller form factors and the issue of super-wide GPU slots eating up so much real-estate.

Until AM4 most of the better boards still had x8+x8 bifurcation with the 2nd slot physically x16 and then usually an x4 slot from the chipset and an x1 from left-over lanes.

With AM5 I see the x8+x8 just gone (until you get to the €800 range), as are the x1 leftover ones, while the x4 is often a physical x16 but fed from the chipset, not the CPU, so no bifurcation, even if at PCIe v5 that would make a ton of sense. Clearly they want those with the need and the know-how in the higher price bracket. And at that point 10Gbit might just be on-board, so you no longer need those extra slots... until you decide to aim for 25/40/100Gbit.
Another potential reason not to put a big card in a small slot would be the weight of the darn thing, if it were something like a graphics card. A longer slot provides additional mechanical support. That might not be a big deal when your machine is just sitting there, but could make all the difference when a machine is shipped.
There should be no weight on the slot, in theory. The original PC design was a flat mainboard, but as soon as they were mounted vertically, e.g. in the IBM PS/2, cards were supposed to be supported from the top and the back as well as the outer bracket mount, while still being much lighter and single-width in general.

I still try to make sure I put as little weight as possible on the slot using back-side supports, because just moving around these workstation beasts on the tiled floor risks killing a €2000 GPU otherwise.

Never shipped or received a fully assembled system (other than notebooks), I wouldn't dare (shipping) or care (receiving). It's always parts or nothing for me since the early 1980s.

But it's fascinating to see how system builders are managing that with crazy foam engineering etc.
 

bit_user

Titan
Ambassador
at that point 10Gbit might just be on-board, so you no longer need those extra slots... until you decide to aim for 25/40/100Gbit.
That's why I spent the extra money to get onboard 10 GigE in the ASRock Rack board for my fileserver. The thing is micro-ATX, meaning it's only got 3 slots. I doubt I'll ever put a dGPU in it, but if I added a 2-slot dGPU, then it would block the PCIe 4.0 x1 slot, leaving only the x8 slot free. Then, if I put a NIC in that slot, it would cut my dGPU down to x8.
Terrible slot layout, really. They should've put the x16 slot at the bottom, though I don't know enough about micro-ATX to say if that would work for all cases. Anyway, that convinced me I really didn't want to have to use a NIC, even though I had a 10 GBase-T card to spare.

Weirdly, they don't offer any X570 chipset boards in regular ATX or EATX form factor - just micro-ATX and mini-ITX.

There should be no weight on the slot in theory. The original PC design was a flat mainboard, but as soon as they were mounted vertically e.g. on IBM PS/2, cards were supposed to be supported from the top and the back as well as the outer bracket mount,
The ATX form factor came along well after tower cases became commonplace, if not even the dominant form factor.

I still try to make sure I put as little weight as possible on the slot using back-side supports, because just moving around these workstation beasts on the tiled floor risks killing a €2000 GPU otherwise.

Never shipped or received a fully assembled system (other than notebooks), I wouldn't dare (shipping) or care (receiving). It's always parts or nothing for me since the early 1980s.

But it's fascinating to see how system builders are managing that with crazy foam engineering etc.
My point wasn't about you, at all! Indeed, it was about anyone selling prebuilt PCs. Whether by mail order or even retail, any situation where the PC isn't being assembled at the location where it's used means it must be able to withstand some degree of shock that a big card in a small slot might not.
 

mac_angel

Distinguished
Mar 12, 2008
@mac_angel, you might find this interesting/useful:

Included are some 10GBase-T switches for as little as $27.50 per port (unmanaged).
I'll check it out, but a lot depends on where a person is from. I'm in Canada, so what's available, prices, and the cost of shipping (and duty) all come into play.
As far as I've found so far, this one on Amazon is the best and cheapest option, and these on eBay to upgrade my two computers. They are a bit down my wish list, though. I'm on permanent disability, so other things are more important financially. Sadly, they've been on my wish list for a few years now. But at least I was able to put in the CAT8 three years ago, so I'm set up in that regard.

edit: I'm still rolling through it, but I did find a version on Amazon.ca that was cheaper than the one I listed. Big thanks!
 
Last edited:
  • Like
Reactions: bit_user