News: Intel's next-gen Nova Lake CPUs will seemingly use a new LGA1954 socket

I believe it's way more than you're thinking. Also, this forum has proven it time and time again.

I myself have gone through the entire Ryzen CPU line using only 3 different motherboards. I actually didn't have to change one of them, but I did just because. We're talking 5 different CPUs. I plan on dropping a 9800X3D into my B650E motherboard. This will hold me over until the next generation. It'll be awesome to only need 3 motherboards to cover a decade and 5 different CPU architectures, IMO.
Sounds like either a shopping problem, like my mom who has some hoarding issues, or a long nightmare of unacceptable PC issues. Going from a 12700K to a 13900KF was enough to show me that the upgrade money was better spent elsewhere. Before the 12700K I was on a 5775C, and that wasn't that big of an upgrade in real-life use, just a few problematic games (that list has grown since; in 2021 it was only a few).
 
Well, I can't speak for the average user, but you would be surprised how many enthusiasts actually use all the lanes in their motherboards for I/O.

Want a 10Gbit NIC? Want USB4, or at least 10Gbps ports? Want RAID5? Want a small homebrew NAS? Want more than 4 NVMe ports with add-in cards? And so on and so forth.

In my case, I have 3 NVMe and 3 SATA drives, and I ran out of "normal" lanes, so I sacrificed the x16 down to x8; the lane splitting is real.
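To put rough numbers on the lane math, here's a quick tally assuming a typical AM5-style split (28 CPU PCIe 5.0 lanes: x16 to the GPU slot, two x4 M.2 slots, x4 to the chipset); these are illustrative assumptions, not any specific board's spec sheet:

```python
# Illustrative CPU lane budget (assumed AM5-style split, not a real board's manual).
CPU_LANES = 28
budget = {"x16 GPU slot": 16, "M.2 slot #1": 4, "M.2 slot #2": 4, "chipset link": 4}

spare = CPU_LANES - sum(budget.values())
print(f"Spare CPU lanes: {spare}")   # 0 -> nothing left for extra devices

# Adding one more x4 NVMe or add-in card therefore means bifurcating the GPU
# slot: it drops to x8 and frees x4/x4 for the extra devices.
budget["x16 GPU slot"] = 8
budget["bifurcated x4/x4"] = 8
print(budget)
```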

It's not overblown or a stretch to say AMD needs to pick up the slack for enthusiasts. Intel is not much better. The jump from the regular consumer platform to workstation is too damn big, and they need to close the gap.

Regards.
I want to preface this by saying I absolutely want client systems to have more lanes or at least a PCIe 5.0 x8 connection to the chipset.

The only thing that's actually going to max out an Intel chipset is 100Gb networking plus a PCIe 4.0 NVMe drive, or transferring locally between multiple PCIe 4.0 NVMe drives. With AMD the bar is lower, due to the chipset link being PCIe 4.0 x4.
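As a rough sanity check of that claim, here's some back-of-envelope arithmetic with nominal per-lane rates (assumed figures, ignoring protocol overhead):

```python
# Approximate usable bandwidth per PCIe lane, GB/s (nominal, no overhead).
GEN3, GEN4 = 0.985, 1.969

intel_dmi  = 8 * GEN4      # ~15.8 GB/s per direction (PCIe 4.0 x8 DMI)
amd_uplink = 4 * GEN4      # ~7.9 GB/s per direction (PCIe 4.0 x4 chipset link)

nic_100gbe = 100 / 8       # ~12.5 GB/s for a 100Gb NIC at line rate
nvme_gen4  = 4 * GEN4      # ~7.9 GB/s best-case sequential for one gen4 x4 drive

print(nic_100gbe + nvme_gen4 > intel_dmi)   # True: that combo can saturate Intel's link
print(nvme_gen4 >= amd_uplink)              # True: one gen4 drive already fills AMD's x4 link
```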
It's not overblown or a stretch to say AMD needs to pick up the slack for enthusiasts. Intel is not much better.
Intel is actually significantly better than AMD when it comes to connectivity, likely due to the DMI link on the W/Z/H chipsets being PCIe 4.0 x8.

Examples:
W680 board: 2x PCIe 5.0 (16 lanes split), 2x PCIe 3.0 (4 lanes each), 3x M.2 NVMe, 4x SATA, 1x SlimSAS (can be either 4x SATA or 1x PCIe 4.0 x4)

W880 board: 2x PCIe 5.0 (16 lanes split), 1x PCIe 4.0 x4, 4x M.2 NVMe, 4x SATA, 1x SlimSAS (can be either 4x SATA or 1x PCIe 4.0 x4)
 
Sounds like either a shopping problem, like my mom who has some hoarding issues, or a long nightmare of unacceptable PC issues. Going from a 12700K to a 13900KF was enough to show me that the upgrade money was better spent elsewhere. Before the 12700K I was on a 5775C, and that wasn't that big of an upgrade in real-life use, just a few problematic games (that list has grown since; in 2021 it was only a few).
No, not really. I enjoy tech and seeing what advances each generation brings. Some upgrades weren't that noticeable while some were night and day. I was able to sell my old parts, or use them to build a nice PC for friends for cheap.

Sounds like you have a problem with Intel not giving enough of a performance uplift per CPU generation, not to mention the constant need for a new motherboard.
 
No, not really. I enjoy tech and seeing what advances each generation brings. Some upgrades weren't that noticeable while some were night and day. I was able to sell my old parts, or use them to build a nice PC for friends for cheap.

Sounds like you have a problem with Intel not giving enough of a performance uplift per CPU generation, not to mention the constant need for a new motherboard.
I haven't had any night-and-day upgrades since going from an Athlon 5600+ to the 4770K. Compare the 4770K to the 13900KF: I think that is a bigger jump than the 1800X to the 9950X3D on paper, but in real life both offer a decent experience. You might be exaggerating. I may have to upgrade my daughter's 4770K for Windows support reasons, which she will fuss about because she is still happy with it. (The 5775C was also more of a sidegrade over the 4770K, which I apparently forgot before getting the 13900KF.)

I think needing to replace the mobo helps some people avoid wasting money on sidegrades. One can sell the mobo just like the CPU, but it just seems like a bigger deal, even if the price is generally less.
 
For me, a CPU upgrade is never a side-grade. I went from a 3700X to a 5800X when I ran my AMD rig and saw a massive improvement, due to WoW being so CPU-dependent. A friend of mine went from a 3600 to a 5600 and had the same experience, as he also plays it. We are a bit of an edge case, though. I was going to get a 5800X3D, but ended up winning my 12700K rig in one of Jay's giveaways.
 
Just curious, but what's the real-world use case for using these 6 drives at full speed all at the same time?
Video and audio editing for games and other personal things. Plus, games. I don't like deleting games, so my Steam library is over 2TB, including mods and such.

I'm always trying to downsize in terms of the number of disks because of the limitations, but I can't depend on single disks holding 8TB+ of data without a backup somewhere.

EDIT: I forgot to add that I also actively use most of my USB ports when in VR. I have the Index with 3 trackers; that's 4 USB3 connectors using their full bandwidth, plus all the other peripherals I have, which means I need 7 USB ports available at any given time: Index, 3 Vive Trackers, Orbweaver, mouse, and keyboard. Then whatever else I may need to use.

Like I said, I can't speak about the "average" user, but I know I need all the I/O AMD/Intel is willing to give me.

Regards.
 
Video and audio editing for games and other personal things. Plus, games. I don't like deleting games, so my Steam library is over 2TB, including mods and such.
So what, you download 5 games from the NAS to your PC(s) at the same time while also doing 4K video editing on a super large file?!
I don't get why you need all of the lanes running at full speed at the same time.
Lane speed only gets impacted if there is actual traffic on the lanes.
 
Video and audio editing for games and other personal things. Plus, games. I don't like deleting games, so my Steam library is over 2TB, including mods and such.

I'm always trying to downsize in terms of the number of disks because of the limitations, but I can't depend on single disks holding 8TB+ of data without a backup somewhere.

Regards.
I think TerryLaze's point was that you aren't using up the PCIe bandwidth, just the slots.

I have an even messier drive setup with lots of old bits: a PCIe 3.0 x4 Optane OS drive, a PCIe 4.0 x4 NVMe (on CPU lanes), a PCIe 3.0 x4 Optane U.2 NVMe over M.2, a SATA SSD, and a SATA HDD, with each SATA drive cached by its own old 118GB Optane PCIe 3.0 x2 NVMe and the OS NVMe cached by 24GB of RAM, using PrimoCache. The most chipset bandwidth I can reasonably use on drives at once is full use of the game drive (PCIe 3.0 x4 max) plus full use of the OS drive (PCIe 3.0 x4), which is a single PCIe 4.0 x4 worth of bandwidth.
Mind you, my CPU will be working harder, because the highest-latency (easiest on the CPU) drive I have is the Hynix P41, so my system feels a bit snappier than average (not a lot, just a bit), but it doesn't use much of the PCIe 4.0 x8 DMI my Z790 chipset has.
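Putting rough numbers on that (assumed nominal per-lane rates, no protocol overhead), two busy gen3 x4 drives amount to about half of a gen4 x8 DMI in one direction:

```python
GEN3, GEN4 = 0.985, 1.969          # approx. usable GB/s per lane (assumed)

game_drive = 4 * GEN3              # ~3.9 GB/s, PCIe 3.0 x4
os_drive   = 4 * GEN3              # ~3.9 GB/s, PCIe 3.0 x4
dmi_z790   = 8 * GEN4              # ~15.8 GB/s per direction

busy = game_drive + os_drive       # ~7.9 GB/s, about one gen4 x4 worth
print(round(busy, 1), round(busy / dmi_z790, 2))   # 7.9 0.5
```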

But if I had both my OS and game drives as PCIe 4.0 x4 and on chipset lanes, then my PCIe 4.0 x8 DMI could theoretically get saturated, since I also have a sound card on the chipset lanes. It could also get saturated if I made a transfer from one PCIe 4.0 x4 SSD to another, if those and my OS drive were on the chipset.
 
How often do you plan on maxing out every single SSD in a NAS all at the same time?! (While also running max transfers to and from the GPU and the CPU.)
It doesn't matter if there are not enough PCIe lanes if you never use all of them at the same time.
See the post from Fran (#124)... he understands. Do you know any entry-level boards with a built-in 10G NIC? I sure don't. A hardware RAID card is almost a must.

A computer I have is STILL on X99. That computer has a RAID card, sound card, capture card, and video card, as well as 2 NVMe and 3 or 4 SATA drives. I need the lanes for that computer.
 
W680 board: 2x PCIe 5.0 (16 lanes split), 2x PCIe 3.0 (4 lanes each), 3x M.2 NVMe, 4x SATA, 1x SlimSAS (can be either 4x SATA or 1x PCIe 4.0 x4)

W880 board: 2x PCIe 5.0 (16 lanes split), 1x PCIe 4.0 x4, 4x M.2 NVMe, 4x SATA, 1x SlimSAS (can be either 4x SATA or 1x PCIe 4.0 x4)
Um... those look like workstation boards, based on the W680 and W880 references. I'm referring to consumer boards, ones that the general public will buy.
So what, you download 5 games from the NAS to your PC
Do you know any games that can be installed and played off a NAS? I sure don't...
 
Um... those look like workstation boards, based on the W680 and W880 references. I'm referring to consumer boards, ones that the general public will buy.
When it comes to connectivity, W680 = Z690 and W880 = Z890; I only picked workstation boards because I knew what their connectivity was. The point being, any board based on Intel's Z/W/H 600 series or newer has that kind of connectivity capability; it's just a matter of looking. I don't know whether that will be true for LGA 1851, but if you didn't do overclocking, Asus' LGA 1700 workstation board was the best price vs. features of the generation (the base model was ~$340).

The board in my primary system:
2x PCIe 5.0 (16 lanes split), 1x PCIe 4.0 x4, 4x M.2 NVMe, 1x M.2 NVMe/SATA, 4x SATA

Lower cost Z890:
1x PCIe 5.0 (16 lanes), 2x PCIe 4.0 x4, 4x M.2 NVMe, 4x SATA

Asus also sells a Z890 workstation board with a better VRM than the W880 board, but identical connectivity, so:
2x PCIe 5.0 (16 lanes split), 1x PCIe 4.0 x4, 4x M.2 NVMe, 4x SATA, 1x SlimSAS (can be either 4x SATA or 1x PCIe 4.0 x4)
 
I have an H670 motherboard here with a 4060, a U.3 NVMe on PCIe x4 (chipset), a sound card, and 3 NVMe SSDs. Today, motherboards with more slots are premium junk (this motherboard needs to last five more years).
 
I can't imagine going forward with only 4 SATA ports on mobos. Of course, add-in cards exist, but they aren't quite as fast, right? And when we're talking about moving or writing data, that time could really add up quickly, I bet, on the large storage volumes HDDs provide on the SATA interface.
Well, you can get PCIe cards with more SATA ports on them.

If you run out of room or lanes for those, you can use M.2 SATA adapter cards instead.
 
As I recall, in Haswell's case the small IPC gains were also because power efficiency gains were prioritized above IPC gains. Intel was trying to get Haswell into fanless tablets.
In Haswell, Intel seemed to put most of their focus into AVX2 and TSX/HLE. I don't know if that's a good excuse not to deliver better IPC gains, but I guess AVX2 sure benefits IPC on anything that can take advantage of it.

Anyway, if you want to improve efficiency, a good way to do it is by improving IPC and either keeping clocks the same or even dropping them. Probably the best example of that was Core 2.
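A toy model of that argument (made-up numbers; dynamic power treated as roughly proportional to frequency times voltage squared, which glosses over leakage and real V/f curves):

```python
def perf(ipc, freq_ghz):
    """Relative throughput: instructions per cycle times cycles per second."""
    return ipc * freq_ghz

def dyn_power(freq_ghz, volts):
    """Relative dynamic power, ~ C * V^2 * f with capacitance folded in."""
    return freq_ghz * volts ** 2

baseline = perf(1.00, 4.0) / dyn_power(4.0, 1.20)   # old core
improved = perf(1.15, 4.0) / dyn_power(4.0, 1.20)   # +15% IPC, same clock and voltage
print(improved / baseline)                          # ~1.15x perf per watt from IPC alone
```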
 
Well, I can't speak for the average user, but you would be surprised how many enthusiasts actually use all the lanes in their motherboards for I/O.

Want a 10Gbit NIC? Want USB4, or at least 10Gbps ports? Want RAID5? Want a small homebrew NAS? Want more than 4 NVMe ports with add-in cards? And so on and so forth.

In my case, I have 3 NVMe and 3 SATA drives, and I ran out of "normal" lanes, so I sacrificed the x16 down to x8; the lane splitting is real.
In 2012, I built an Intel workstation with a CPU that had 40 lanes of PCIe 3.0. Initially, I just had a GPU. Then I added an NVMe SSD in a x4 slot. Finally, I found a good deal on a PCIe 2.0 x8 10 Gigabit Ethernet card. So, in the end, I was using 28 lanes. I also had a legacy PCI capture card plugged in at one point.

So, my extra lanes did serve me well. Had I gotten a consumer CPU, I wouldn't have been able to add that network card, without halving my GPU connectivity, as you mentioned.
 
If I had both my OS and game drives as PCIe 4.0 x4 and on chipset lanes, then my PCIe 4.0 x8 DMI could theoretically get saturated, since I also have a sound card on the chipset lanes. It could also get saturated if I made a transfer from one PCIe 4.0 x4 SSD to another, if those and my OS drive were on the chipset.
None of those scenarios should saturate your DMI link. First, consider that PCIe is bidirectional. Data can simultaneously flow in both directions on each lane. Second, audio data is relatively low-bandwidth and will be streaming from DRAM to your sound card, yet the direction of greatest and most likely data flow from your OS drive & game drive is from the drive to DRAM.

In the second scenario, if you copied from one SSD to another, the data would go from the source drive to DRAM, then from DRAM to the destination drive. Even if they're both connected via DMI, you're only occupying 4 lanes in one direction and 4 lanes in another. DRAM is what's being hit twice (and it's unidirectional), but it has enough bandwidth to cope with that amount of traffic.
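To put rough numbers on that copy scenario (assumed nominal per-lane rates, ignoring protocol overhead):

```python
GEN4 = 1.969                      # approx. usable GB/s per PCIe 4.0 lane (assumed)

ssd_x4 = 4 * GEN4                 # ~7.9 GB/s per drive
dmi_x8 = 8 * GEN4                 # ~15.8 GB/s in EACH direction

upstream   = ssd_x4               # source SSD -> DRAM
downstream = ssd_x4               # DRAM -> destination SSD
print(round(upstream / dmi_x8, 2), round(downstream / dmi_x8, 2))   # 0.5 0.5
print(round(upstream + downstream, 1))   # ~15.8 GB/s total hitting DRAM, which copes fine
```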
 
None of those scenarios should saturate your DMI link. First, consider that PCIe is bidirectional. Data can simultaneously flow in both directions on each lane. Second, audio data is relatively low-bandwidth and will be streaming from DRAM to your sound card, yet the direction of greatest and most likely data flow from your OS drive & game drive is from the drive to DRAM.

In the second scenario, if you copied from one SSD to another, the data would go from the source drive to DRAM, then from DRAM to the destination drive. Even if they're both connected via DMI, you're only occupying 4 lanes in one direction and 4 lanes in another. DRAM is what's being hit twice (and it's unidirectional), but it has enough bandwidth to cope with that amount of traffic.
I was talking purely theoretically, though. One needs Goldilocks conditions to hit peak bandwidth on an SSD.
 
I was talking purely theoretically, though. One needs Goldilocks conditions to hit peak bandwidth on an SSD.
Even then, your math only works if PCIe were unidirectional, which it's not.

You'd need to find a use case where more than x8 lanes' worth of data is all trying to flow in the same direction at the same time. Basically, like trying to read from a RAID across 3 or more SSDs.
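For example, with the same assumed nominal rates, a striped read across three chipset-attached gen4 x4 SSDs asks for more one-way bandwidth than a gen4 x8 DMI can deliver:

```python
GEN4 = 1.969                  # approx. usable GB/s per PCIe 4.0 lane (assumed)

drive  = 4 * GEN4             # ~7.9 GB/s per gen4 x4 SSD
dmi_x8 = 8 * GEN4             # ~15.8 GB/s in one direction

wanted = 3 * drive            # ~23.6 GB/s of reads, all flowing drive -> DRAM
print(round(wanted, 1), wanted > dmi_x8)   # 23.6 True: reads throttle at the DMI ceiling
```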
 
No, it's a delicate balance of how much IPC they can deliver at a certain clock speed, on a certain process node, in a certain area. Increasing IPC comes at the expense of die area and clock speed. It's not just a dial you can turn up or down, independent of everything else.

It also takes time to increase the sophistication of their designs, which mostly build on what they did in the previous generation. Realistically, they can only change a certain amount between one generation and the next, which goes towards explaining some of the improvements in Zen 3, which was made on mostly the same node as Zen 2 and has only like 10% more die area.

Engineering is a very incremental exercise, which a lot of people might not fully appreciate. You can do all the modeling and estimation you want of a chip, but at some point you just have to build the thing. Then, take detailed measurements, do thorough analysis, and figure out what worked and what didn't, so you can decide what to build on or scrap, in future generations.

Time is also a factor. Not only do they need to do all of the modeling and design, but still leave time for testing, debugging, and a couple respins. And they do a lot of testing, since chip bugs can be so costly. At one point in time, I think the industry average was 2 test engineers for every 1 design engineer.
Then where do we get these leaks every gen, a few months before every release, gen after gen, on AMD CPUs? It's almost like clockwork now.

Someone is either leaking BS constantly, or AMD is working 2 gens in front of what they are releasing, because it makes no sense to hype a 40% uplift only to disappoint.
The reason they release the "Super" and other up-spec'd editions is because manufacturing yields improve over time, giving them margins to unlock more of the chip and push clock speeds a bit further.
I call BS on that one. Nvidia knows they have market share and mind share; they can sell rubbish to idiots, and those idiots will come crawling back for more when Nvidia supersedes that GPU months later with little gain in it.

It's pure money-making, nothing more. If they do what you say they do, why doesn't AMD do the same thing?

There was a 6950 XT basically at the end of the lifespan of the 6000 series, and there was no 7950 XTX at the end of the 7000 series' lifespan.

This is Nvidia at its scummy best !!
Because the X3D die adds cost and there are some use cases where it adds basically no performance.

Another reason is that it takes more time to do the X3D CPUs. If they ship the non-X3D version first, they have time to perfect the base CPU. Then, they have a stable platform to use for optimizing the X3D version.
X3D doesn't always add more performance, I agree!!

But it's the simple logic of: why not kill two birds with one stone?

Why would I spend, let's say, $300 on a non-X3D that is only good in some games when I can have a 9800X3D that is better in all games?

Unless, of course, I was pairing a 5090 with a 9600X, and 1080p and 1440p were never used... (the "is there a bottleneck??" question).

Same with the 9950X3D: granted, the single stacked-cache CCD is a bummer, but I hear they are changing that next gen. Want the best of both worlds? Buy the X3D over a 9950X every day of the week.

As for cost: at this point, for the ones complaining about ever-rising costs, this is caused by stupid governments and simple high demand. For the millions complaining, there are billions paying whatever companies ask!!
 
Then where do we get these leaks every gen, a few months before every release, gen after gen, on AMD CPUs? It's almost like clockwork now.

Someone is either leaking BS constantly, or AMD is working 2 gens in front of what they are releasing, because it makes no sense to hype a 40% uplift only to disappoint.
AMD and Intel are both working 2-3 gens ahead; however, that's not where the bad info comes from. I think the bad info is due to influencers who have learned that such outlandish claims get them more views. Don't reward them. When someone is consistently feeding you bad information, the best thing you can do is stop following them and viewing their content.

I call BS on that one
It's a fact that yields and ASIC quality both increase over time. It's logical to conclude that these late-cycle refreshes, with more enabled units and higher clocks, are made possible by both factors. You can believe whatever you like, but the facts align with what I'm saying. Neither of us has definitive proof.

If they do what you say they do, why doesn't AMD do the same thing?
AMD did that with the RX 6x50 refresh, which included not only the RX 6950 XT, but also the RX 6650 XT and RX 6750 XT.

there was no 7950 XTX at the end of the 7000 series' lifespan...
I'm not exactly sure. The RDNA3 series didn't sell terribly well, though. Maybe they thought there wouldn't be enough demand to justify doing it.

Another factor might've been the lack of GDDR6 above 20 Gbps. If you look at the RX 6x50 refresh, they didn't just boost core clocks, but also memory clocks. If all you're going to do is boost core clocks, then I guess it's not really any different than buying a factory-overclocked model.
 
In Haswell, Intel seemed to put most of their focus into AVX2 and TSX/HLE. I don't know if that's a good excuse not to deliver better IPC gains, but I guess AVX2 sure benefits IPC on anything that can take advantage of it.

Anyway, if you want to improve efficiency, a good way to do it is by improving IPC and either keeping clocks the same or even dropping them. Probably the best example of that was Core 2.
Haswell had huge improvements all over the design compared to Ivy Bridge, and the desktop clock speed was the same as Ivy Bridge. The problem was that the improvements went to mobile. Intel used to target the core design at 35-45W laptops and raise power levels to take the same design to desktop, but for Haswell the core target was 20W so it could scale down to 8W tablets. 20W laptops got a CPU that clocked a lot higher and had slightly better IPC; desktops got only the slightly better IPC.
https://www.anandtech.com/show/6355/intels-haswell-architecture/2