Just look at what's happened to pricing of 10 Gigabit switches, for a datapoint that has almost nothing to do with the PC market. Last I checked (a few months ago), they still cost almost double the pre-pandemic prices.
The same can't be said for Intel and Broadcom. It's not like AQC107 is a drop-in replacement for them, so I think a lot of manufacturers tried to wait it out.
There was a motherboard with 2x 10 Gigabit MACs I was trying to buy that was unavailable for about a year. They even made a variant that swapped out Intel for Broadcom, but availability of that one was little better.
I have a separate datapoint on 10 Gigabit Intel NICs, which have suffered substantial price increases and availability problems.
Geizhals.de confirms no price hikes on the Netgear ProSAFE XS500M during the pandemic; if anything it became cheaper (have a look at the MAX data, which goes back to 2017). Prices fell below €50/port, which would have been unbelievable during the decade before.
Same with the AQC107 in its very popular ASUS XG-C100C incarnation: cheaper during the pandemic than before or after, at €80/port. Again, the Geizhals price data goes back to 2017.
10Gbit Ethernet was considered strictly enterprise and became a battle zone during the first two decades after its specification: it threatened to replace nice fat FC revenues with FCoE, and vendors were going for fat offload ASICs that could also do really cool VM and network virtualization things. Of course everybody wanted to sell fiber links and GBICs, so 10GBase-T was treated like a pariah. It didn't help that it originally cost nearly 10 Watts per port to do the modulation made necessary by a medium that couldn't physically support higher frequencies.
The result was costly functional monsters with buggy drivers, and VMware eventually gave up and moved all their overlay network and software switch logic into software, because it cost only 2-3% of CPU overhead but allowed rapid code changes, which ASICs didn't (today at 100/400Gbit they are reversing that again).
Many of the fabric vendors went bust. Intel survived relatively well with much less offload functionality, as it had never put that much faith into really fat ASICs; instead it preferred CPU-side offload engines on edge SoCs, which ASIC vendors obviously could not duplicate.
But for the hyperscalers 10Gbit quickly became far too small, and merchant silicon took over the switches at x0 and x00 Gbit/s, so there was little money to be made at 10Gbit/s. Servers ship with DPUs these days that are 400Gbit/s going on 800.
Aquantia, with NBase-T, tried to change the game by consumerizing >1Gbit Ethernet: they reduced the power cost of the modulation to around 3 Watts at 10Gbit/s, lower at 5Gbit/s, and almost down to 1Gbit/s levels for 2.5Gbit/s. They also offered fully integrated chips for NICs and switches (cascadable 4-port designs), which changed the price game significantly and essentially made 10Gbit "consumer ready"... except that consumers evidently weren't buying, perhaps because the only network they cared about was the broadband connection, and East-West traffic was never an issue.
Nobody at Microsoft or Intel was really interested in better office LANs either, so better-than-Gbit LAN somehow never made it to the workplace.
Intel and the big ASIC manufacturers refused for ages to do anything NBase-T, because they wanted to keep selling high-revenue parts. When Intel finally did 2.5Gbit/s, they must have put interns on the job, because the result was bad and buggy, while Realtek was, well, Realtek: their 2.5Gbit chips are no better or worse than their 1Gbit ones.
What Intel is still selling at 10Gbit is mostly chips they designed 15 years ago and might still make at 90nm somewhere. They just can't compete with Aquantia on power efficiency, and Broadcom is even worse: their chips needed either active cooling or server-chassis airflow to survive when I last tried them more than a decade ago, and I doubt they have redesigned or shrunk their 10Gbit ASICs since.
They are also mostly dual-port chips by design, requiring twice the PCIe lanes of a single-port part, because that's what everybody wants in servers but nobody cares for in consumer hardware. Not that I've ever had a single Aquantia card fail on me: much lower power consumption evidently helps.
No, they are not drop-in replacements. But at least these days (since the driver went into the upstream kernel some years ago) they work much better out of the box with just about every Linux I've tried, while the 2.5-10Gbit NIC drivers from Intel have become a true nightmare: hundreds of device IDs for dozens of chipsets and revisions, and at least a handful of distinct drivers.
Aquantias support nothing fancy in terms of offloading, iSCSI or FCoE; they are basically 1Gbit technology scaled to NBase-T: exactly what I want for my home lab, and these days they fit in a 1x PCIe v4 slot (or on a 20Gbit USB port).
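The arithmetic behind the single-lane claim is simple enough to sketch (a rough estimate: PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding; real throughput is a bit lower once TLP/DLLP protocol overhead is counted, but still comfortably above 10Gbit/s):

```python
# Raw bandwidth of one PCIe 4.0 lane vs. what a 10GbE NIC needs.
PCIE4_GT_PER_S = 16.0    # PCIe 4.0 signaling rate per lane (GT/s)
ENCODING = 128 / 130     # 128b/130b line coding, used since PCIe 3.0

lane_gbit = PCIE4_GT_PER_S * ENCODING   # usable line rate per lane
print(f"PCIe 4.0 x1 line rate: {lane_gbit:.2f} Gbit/s")   # ~15.75
print(f"headroom over 10GbE:   {lane_gbit - 10:.2f} Gbit/s")
```

By the same math, a PCIe 3.0 x1 slot at 8 GT/s yields only ~7.88 Gbit/s, which is why the v4 lane matters here.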