Best Holiday CPU deals: Save on AMD and Intel

Back in October, Newegg had the lowest-ever price on the i5-12600, at $215. Then it jumped back up to $240 before I noticed the dip. I'm holding out to see if it matches or beats that all-time low on Friday or Monday.

Why that model? Well, it's non-hybrid, so Win 10 should have no issues on it. More importantly, it's more efficient than the bigger dies, and I'm trying to build a box that won't heat up my home office over the summer. So I'm aiming for low idle power and a 65 W TDP. I'm planning on using just its iGPU for graphics.
 
There are no deals.
I was considering building a new PC for my son. I last built a PC for myself 16 months ago. Building the exact same PC will now cost MORE even though everything is much older. There are no deals. Solid state hasn't changed in price. Spinning rust has actually gone up. GPUs have gone UP? Seriously, this site is still recommending GPUs from before Covid. AMD chips haven't changed price and only the overly hot Intel mid-range CPUs have dropped. Has the entire industry stagnated?
 
There are no deals.
I think the best deals were either during "October Prime Day" or Cyber Monday. Prices of the few things I was tracking have mostly gone back to normal now. Most deals you find would probably be on lower-demand items that the retail/e-tail channel is trying to clear out.

I last built a PC for myself 16 months ago. Building the exact same PC will now cost MORE even though everything is much older. There are no deals. Solid state hasn't changed in price. Spinning rust has actually gone up. GPUs have gone UP?
This timing is the key thing, though. 16 months ago was back when the industry was still in the post-pandemic hangover. That artificially depressed prices below what they would normally be, especially in storage. GPUs were also suffering from a crypto crash, around that time.

AMD chips haven't changed price and only the overly hot Intel mid-range CPUs have dropped. Has the entire industry stagnated?
That's an interesting question. While price reductions on legacy nodes seem to have continued, I think there's a longer overlap before transistors on newer process nodes achieve price parity with the older ones. We see this mostly with GPUs, but it should also be apparent in DRAM and (to a lesser extent, due to its 3D structure) NAND.

Add to that the AI-driven demand for the newer nodes, and that should either keep CPUs and GPUs on older nodes for longer or make them even more expensive on newer ones.

I think it's very telling that Nvidia's Blackwell is staying on an N4-family node. Intel's Battlemage is staying on N6, and I'd guess AMD's RX 8000 will probably be on N4P? It's not for lack of progress on TSMC's part; I guess it's just not economical to bring these large dies to the consumer market on newer nodes.
 
Eyeing an open-box 14700K myself. Very tempting, but I would struggle to justify it if Bartlett Lake actually comes out, and buying three processors for the two LGA1700 boards I have just isn't practical. Not that I would replace my 12100 anytime soon; it really gets the job done as a living room PC.
 
$434 for a 9950X is a pretty insane deal... Must! Hold! Out! A 12-core, single-CCD 10800X3D will probably be $450-500 and faster for my tasks...
I think single vs. dual CCD is generally a bit overblown, but maybe for gaming...

I'm betting Zen 6, itself, isn't going to be that much faster. 13% is the figure I'm going with. Not bad, but certainly not enough for a single-generation upgrade.

It's possible you could get more L3 cache, even with the 9900X. When AMD moves to 12-core CCDs, I'll bet they bump L3 cache only to 48 MB per die, not 64, as the 12-core 9900X has.
 
I think single vs. dual CCD is generally a bit overblown, but maybe for gaming...

I'm betting Zen 6, itself, isn't going to be that much faster. 13% is the figure I'm going with. Not bad, but certainly not enough for a single-generation upgrade.

It's possible you could get more L3 cache, even with the 9900X. When AMD moves to 12-core CCDs, I'll bet they bump L3 cache only to 48 MB per die, not 64, as the 12-core 9900X has.
Clock speeds are rumored to be very high; combine that with the node improvement, and it could be 15-25% faster than the 9900X3D.
 
Eyeing an open-box 14700K myself.
I'd be leery of buying any of the 253 W RPL/R parts that had already been used, unless the warranty was for sure intact. I'm contemplating a 14700K as well, and if they get under $300 (new), I might get one rather than waiting for BTL. If your primary use case is lightly-threaded work or gaming, I think it would be good to wait and see which BTL SKUs get released and what their prices look like, since any of them without E-cores should be a brand-new die.
 
Clock speeds are rumored to be very high; combine that with the node improvement, and it could be 15-25% faster than the 9900X3D.
Er... I think you mean combining clock speeds with IPC improvement. Node improvement is what primarily yields the clock speed increase.

I'll bet you 100 "internet points" that it's not more than 15% faster per-core.
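For anyone wanting to sanity-check how those per-core gains compose: IPC and clock gains multiply rather than add. A quick sketch, with made-up numbers for illustration (not rumored specs):

```python
# Per-core performance is roughly IPC x clock frequency, so fractional
# gains in each multiply rather than add. Numbers below are illustrative.

def percore_gain(ipc_gain: float, clock_gain: float) -> float:
    """Combined per-core speedup from fractional IPC and clock improvements."""
    return (1 + ipc_gain) * (1 + clock_gain) - 1

# e.g. a hypothetical 8% IPC uplift plus a 6% clock bump:
print(f"{percore_gain(0.08, 0.06):.1%}")  # prints 14.5%
```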
 
Er... I think you mean combining clock speeds with IPC improvement. Node improvement is what primarily yields the clock speed increase.

I'll bet you 100 "internet points" that it's not more than 15% faster per-core.
To my understanding, node improvement directly affects the ability to create a better architecture, and it's the architecture improvement that increases IPC. The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster. These smaller transistors also allow for more advanced power delivery mechanisms, which can contribute to clock speeds. More advanced packaging techniques allow for similar features in a smaller area, further increasing density, which contributes to clock speeds. Every once in a while, a different kind of transistor, either in material, format, or both, comes out that also drastically changes some of the above dynamics. Correct me if I am wrong, as this isn't an area I'm particularly knowledgeable in.
 
The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.
No, transistors don't switch "inside of themselves"; they just switch their own states, not between each other.
But the more important part is that denser means more heat in the same space, and higher clocks mean more energy needed in the same amount of time (each switch takes energy), so you get more heat per unit of time on top of more heat per unit of area.

So don't expect a huge increase in clocks unless we also get a huge increase in power draw, and with Ryzen being pretty temperature-sensitive and needing lower temps to work, even that would be difficult to pull off.
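To put toy numbers on the heat-density point, using the standard dynamic-power approximation P ≈ C·V²·f (all values below are invented for illustration, not real process data):

```python
# Dynamic switching power per transistor: P = C * V^2 * f.
# A shrink lowers C and V per transistor, but packing ~2x the
# transistors into the same area can still raise heat per mm^2.

def power_density_w_mm2(c_farads, v_volts, f_hz, transistors, area_mm2):
    """Watts per mm^2 for a block of identical switching transistors."""
    return (c_farads * v_volts**2 * f_hz) * transistors / area_mm2

old = power_density_w_mm2(1.0e-15, 1.00, 4e9, 1e8, 10)  # older node
# Hypothetical shrink: 30% less C, 10% less V, same clock, 2x transistors:
new = power_density_w_mm2(0.7e-15, 0.90, 4e9, 2e8, 10)

# Each transistor now burns less power, yet heat density still climbs:
print(old, new)
```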
 
No, transistors don't switch "inside of themselves"; they just switch their own states, not between each other.
I never said the above quotation. Smaller transistors do indeed switch faster; that is a fact... I never said anything about them switching things other than themselves... Smaller transistors have lower capacitance and resistance. Lower capacitance means they require lower voltages to run at any given speed, and lower resistance means lower latency across the smaller, post-shrink package, less waste heat, and higher current for faster switching.
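As a toy illustration of that RC argument (first-order gate delay ≈ R·C; the numbers here are invented to show the scaling, not real process data):

```python
# First-order gate delay ~ R * C: a shrink that lowers gate capacitance
# (and somewhat lowers resistance) shortens the delay, i.e. faster switching.

def rc_delay_s(r_ohms: float, c_farads: float) -> float:
    """Simple RC time constant, in seconds."""
    return r_ohms * c_farads

old_delay = rc_delay_s(10_000, 1.0e-15)  # 1e-11 s = 10 ps
new_delay = rc_delay_s(9_000, 0.7e-15)   # 6.3e-12 s = 6.3 ps after a shrink
print(old_delay, new_delay)
```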
 
To my understanding, node improvement directly affects the ability to create a better architecture, and it's the architecture improvement that increases IPC.
Node generally has nothing to do with architecture unless there are specific design features being put into play (BSPDN for example). The biggest impact node has on design is typically from a financial aspect in terms of die per wafer (a smaller node means you can design a larger chip with less penalty).
The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.
Nope it can, but there's no guarantee. Architecture and node both play a part in clock speeds, but a node being smaller doesn't guarantee you're actually getting higher clocks: see Intel 7 vs... everything else. ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.

Concentration of heat is also becoming a very big issue with smaller client chips which may have something to do with stagnating clock speeds.
 
To my understanding, node improvement directly affects the ability to create a better architecture,
It's an enabler, for sure. You can do a bigger core in an older node, but if you go too complex for the density supported by the manufacturing node, you have to add more latency-inducing pipeline stages or reduce peak clock speeds. The latter is basically the Apple playbook. Since their cores are mobile-oriented, they're not designed to hit high clock speeds. Instead, they focus on more complex designs that more than make up in IPC what they lack in high frequencies.

The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.
Yeah, also requiring less power to do so, which adds more headroom for cranking up frequencies. This is why you tend to see the formulation of: "X performance at same power; Y power at same performance", when someone like TSMC introduces a new node. In this analysis, the constant is the microarchitecture, which is usually something like an old ARM core design that they fab on both nodes for comparison purposes.
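Those two figures are really just different operating points on the same voltage/frequency curve. A crude sketch, under the common approximations that P ∝ f·V² and that voltage scales roughly linearly with frequency; the 15% voltage improvement is invented for illustration, not a real foundry claim:

```python
# Crude V/f model: voltage scales ~linearly with frequency, and the new
# node (hypothetically) needs 15% less voltage at any given clock.
# With P ~ f * V^2, that makes P ~ f^3. All numbers are illustrative.

V_SCALE = 0.85  # assumed: new node runs at 85% of old voltage at same f

def power(f: float, v_scale: float = 1.0) -> float:
    """Normalized dynamic power at frequency f, with V proportional to f."""
    return f * (v_scale * f) ** 2

# "Y% less power at the same performance":
iso_perf_power = power(1.0, V_SCALE)            # ~0.72 -> ~28% power saving

# "X% more performance at the same power": solve f^3 * V_SCALE^2 = 1
iso_power_freq = (1.0 / V_SCALE**2) ** (1 / 3)  # ~1.11 -> ~11% more clock

print(iso_perf_power, iso_power_freq)
```

Note the asymmetry: the power saving at iso-performance looks much bigger than the frequency gain at iso-power, because power rises roughly with the cube of frequency in this model. That's exactly the shape of those foundry marketing claims.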

These smaller transistors also allow for more advanced power delivery mechanisms,
Power delivery isn't directly related. What Intel's "Backside Power" feature of their 18A does is to improve logic density by eliminating some power distribution wires from that plane. This factors into their claims about overall density improvements for the node. So, if you had two nodes that were exactly the same, other than backside power delivery, Intel would say the latter is higher-density, because they're counting overall logic density.

More advanced packaging techniques allow for similar features in a smaller area,
No, that's just about enabling designs to have multiple chiplets and layers. Chiplets are mostly about achieving better economies of scale than directly benefiting performance. So far, the only case where you can directly attain better performance via advanced packaging is through die-stacking, such as RAM-on-logic.

Every once in a while, a different kind of transistor, either in material, format, or both, comes out that also drastically changes some of the above dynamics.
New materials and transistor designs are an essential part of continuing to shrink down to smaller scales.
 
Node generally has nothing to do with architecture unless there are specific design features being put into play (BSPDN for example). The biggest impact node has on design is typically from a financial aspect in terms of die per wafer (a smaller node means you can design a larger chip with less penalty).

Nope it can, but there's no guarantee. Architecture and node both play a part in clock speeds, but a node being smaller doesn't guarantee you're actually getting higher clocks: see Intel 7 vs... everything else. ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.

Concentration of heat is also becoming a very big issue with smaller client chips which may have something to do with stagnating clock speeds.
The node you are designing the chips on is outcome determinative of the architecture.

So, "Yes it can," not "Nope, it can?" I was not talking about guarantees or edge cases. The examples you brought up massively change architectures. Keeping the architecture the same, but on bigger vs. smaller nodes, the smaller nodes will clock higher at the same power, or clock the same at lower power. I believe this comes down to reduced RC delay. I never said that architecture does not affect clock speeds. Smaller transistors that are closer together have less resistance and capacitance. Lowering resistance means lower voltage requirements, while also reducing waste heat per transistor. You are correct that the concentration of heat goes up, but the per-transistor waste heat goes down.
 
ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.
You've got that backwards. The microarchitecture is identical. The node changes enabled voltage/frequency curve improvements, which then enabled higher peak frequencies and better frequency scaling of multi-threaded workloads.

Besides that, they added E-cores, enlarged L2 caches, and tweaked the ring bus. But, if we're just looking at the core, then it's really a story of how they were able to improve on Intel 7.
 
The node you are designing the chips on is outcome determinative of the architecture.
Yes, mostly. One of the more interesting and recent case studies is Rocket Lake, where Intel backported the cores of Ice Lake from their 10 nm to their 14 nm node. The result was that it ran too hot. Also, the size of the cores meant that Intel had to walk back from 10 cores per die to just 8. Combined with poor frequency scaling, that meant Rocket Lake i9 actually performed worse on some multi-threaded workloads than a Comet Lake i9.

The properties of a node are very much in the minds of CPU designers, as they weigh all the factors they need to balance. Furthermore, Intel has famously co-optimized their microarchitecture and manufacturing nodes, until recently.
 
You've got that backwards. The microarchitecture is identical. The node changes enabled voltage/frequency curve improvements, which then enabled higher peak frequencies and better frequency scaling of multi-threaded workloads.

Besides that, they added E-cores, enlarged L2 caches, and tweaked the ring bus. But, if we're just looking at the core, then it's really a story of how they were able to improve on Intel 7.
I've seen zero evidence indicating they changed the node between ADL/RPL. If you've got some, please share.

The cache redesign, which is absolutely part of the architecture, is actually what allowed for the higher peak frequencies.
 
I've seen zero evidence indicating they changed the node between ADL/RPL. If you've got some please share.

The cache redesign, which is absolutely part of the architecture, is actually what allowed for the higher peak frequencies.
How does lower-latency memory access increase clock speeds? All this does is optimize the usefulness of the clocks that already exist. This could increase performance, but not clock speed.
 
