> There are no deals.

I think the best deals were either during "October Prime Day" or Cyber Monday. Prices of the few things I was tracking have mostly gone back to normal now. Most deals you find would probably be on lower-demand items that the retail/e-tail channel is trying to clear out.
> I last built a PC for myself 16 months ago. Building the exact same PC will now cost MORE even though everything is much older. There are no deals. Solid state hasn't changed in price. Spinning rust has actually gone up. GPUs have gone UP?

The timing is the key thing, though. 16 months ago, the industry was still in the post-pandemic hangover. That artificially depressed prices below where they would normally be, especially in storage. GPUs were also suffering from a crypto crash around that time.
> AMD chips haven't changed price, and only the overly hot Intel mid-range CPUs have dropped. Has the entire industry stagnated?

That's an interesting question. While price reductions on legacy nodes seem to have continued, I think there's now a longer overlap before transistors on newer process nodes reach price parity with those on older ones. We see this mostly with GPUs, but it should also be apparent in DRAM and, to a lesser extent (due to its 3D structure), NAND.
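To show why that parity takes a while, here's a back-of-envelope sketch: cost per transistor is roughly wafer price divided by (density times usable area), so a denser node only wins once its wafer-price premium falls below its density advantage. Every number below is an illustrative placeholder, not a real foundry price or density.

```python
# Back-of-envelope cost-per-transistor comparison between an "old" and a
# "new" process node. All numbers are illustrative placeholders, not real
# wafer prices or densities.

def cost_per_mtransistor(wafer_price_usd, mtransistors_per_mm2, usable_mm2):
    """Wafer price spread across every million transistors it yields."""
    return wafer_price_usd / (mtransistors_per_mm2 * usable_mm2)

usable_area = 60_000          # mm^2 of usable silicon per wafer (hypothetical)

old_node = cost_per_mtransistor(10_000, 100, usable_area)   # cheap wafer, lower density
new_node = cost_per_mtransistor(18_000, 150, usable_area)   # pricier wafer, higher density

print(f"old node: ${old_node:.4f} per Mtransistor")
print(f"new node: ${new_node:.4f} per Mtransistor")
# With these numbers the new node is still ~20% more expensive per transistor,
# even though it is 1.5x denser -- parity only arrives as the wafer premium shrinks.
```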
> Eyeing an open-box 14700K myself. Very tempting, but I would struggle to justify it if Bartlett Lake actually comes out...

Then I'd wait, but that's just me. I doubt this will be the last time you find such a deal on an open-box i7-14700K, but I'm not about to wager anything on it.
> $434 for a 9950X is a pretty insane deal... Must! Hold! Out! A 12-core, single-CCD 10800X3D is probably going to be $450-500 and faster for my tasks...

I think single vs. dual CCD is generally a bit overblown, but maybe for gaming...
> I think single vs. dual CCD is generally a bit overblown, but maybe for gaming...

Clock speeds are rumored to be very high; combine that with the node improvement and it could be 15-25% faster than the 9900X3D.
I'm betting Zen 6, itself, isn't going to be that much faster. 13% is the figure I'm going with. Not bad, but certainly not enough for a single-generation upgrade.
It's possible you could get more L3 cache even with the 9900X. When AMD moves to 12-core CCDs, I'll bet they bump L3 to only 48 MB per die, not the 64 MB the 12-core 9900X has across its two CCDs.
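The arithmetic behind that guess, assuming AMD simply keeps the current ~4 MB of L3 per core rather than growing the per-core slice (my assumption, not anything announced):

```python
# Hypothetical L3 sizing if AMD keeps ~4 MB of shared L3 per core on a larger CCD.
# These are assumptions for illustration, not leaked specs.

l3_per_core_mb = 32 / 8        # current 8-core Zen CCD: 32 MB shared L3 -> 4 MB/core

cores_per_ccd = 12             # rumored future 12-core CCD
projected_l3_mb = cores_per_ccd * l3_per_core_mb
print(projected_l3_mb)         # 48 MB per die, vs. 64 MB total on a dual-CCD 9900X
```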
> Eyeing an open-box 14700K myself.

I'd be leery about buying any of the 253 W RPL/R parts that have already been used unless the warranty is for sure intact. I'm contemplating a 14700K as well, and if they get under $300 (new) I might get one rather than waiting for BTL. If your primary use case is lightly threaded/gaming, I think it would be good to wait and see which BTL SKUs get released and what their prices look like, since any of them without E-cores should be a brand-new die.
> Clock speeds are rumored to be very high; combine that with the node improvement and it could be 15-25% faster than the 9900X3D.

Er... I think you mean combining clock speeds with IPC improvement. The node improvement is what primarily yields the clock speed increase.
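For what it's worth, the two factors multiply, so even modest gains stack. A quick sketch with made-up percentages (nothing here reflects actual Zen 6 numbers):

```python
# Per-core performance scales roughly as IPC x clock frequency.
# The gains below are hypothetical placeholders, not real Zen 6 figures.

ipc_gain   = 0.08   # assume +8% IPC from the new architecture
clock_gain = 0.06   # assume +6% peak clocks from the new node

per_core_uplift = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"combined per-core uplift: {per_core_uplift:.1%}")   # ~14.5%
```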
> Er... I think you mean combining clock speeds with IPC improvement. The node improvement is what primarily yields the clock speed increase.

To my understanding, node improvement directly affects the ability to create a better architecture, and it's the architecture improvement that increases IPC. The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster. These smaller transistors also allow for more advanced power delivery mechanisms, which can contribute to clock speeds. More advanced packaging techniques allow for similar features in a smaller area, further increasing density, which contributes to clock speeds. Every once in a while, a different kind of transistor, either in material, format, or both, comes out that also drastically changes some of the above dynamics. Correct me if I am wrong, as this isn't an area I'm particularly knowledgeable in.
I'll bet you 100 "internet points" that it's not more than 15% faster per-core.
> The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.

No. Transistors switch "inside of themselves"; they just switch states for themselves, not between each other.
> No. Transistors switch "inside of themselves"; they just switch states for themselves, not between each other.

I never said that quoted claim. Smaller transistors do indeed switch faster; that is a fact... I never said anything about them switching things other than themselves... Smaller transistors have lower capacitance and resistance. Lower capacitance means they require lower voltages to run at any given speed, and lower resistance means lower latency across a smaller package (because of the node shrink), less heat output as waste, and higher current for faster switching.
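A rough sketch of the physics being argued here, using the textbook first-order relations (dynamic power roughly a*C*V^2*f, gate delay roughly C*V/I). The node-to-node factors are made up purely for illustration:

```python
# First-order CMOS scaling: smaller transistors have lower capacitance and can
# run at lower voltage, which cuts switching energy and gate delay.
# The node-to-node factors below are illustrative assumptions, not real data.

def dynamic_power(activity, cap, volts, freq):
    # P_dyn ~ a * C * V^2 * f
    return activity * cap * volts**2 * freq

def gate_delay(cap, volts, drive_current):
    # t ~ C * V / I_drive  (time to charge the load capacitance)
    return cap * volts / drive_current

# Old node (normalized) vs. a hypothetical shrink: ~30% less C, ~10% less V.
p_old = dynamic_power(0.1, 1.0, 1.00, 5.0e9)
p_new = dynamic_power(0.1, 0.7, 0.90, 5.0e9)
d_old = gate_delay(1.0, 1.00, 1.0)
d_new = gate_delay(0.7, 0.90, 1.0)

print(f"power at the same clock: {p_new / p_old:.2f}x")  # ~0.57x
print(f"gate delay:              {d_new / d_old:.2f}x")  # ~0.63x -> room for higher clocks
```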
> To my understanding, node improvement directly affects the ability to create a better architecture, and it's the architecture improvement that increases IPC.

Node generally has nothing to do with architecture unless there are specific design features being put into play (BSPDN, for example). The biggest impact node has on design is typically from a financial aspect in terms of die per wafer (a smaller node means you can design a larger chip with less penalty).
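To put rough numbers on that die-per-wafer point, here's a sketch using the standard dies-per-wafer approximation and a simple Poisson yield model; wafer size, die areas, and defect density are invented for illustration:

```python
# Rough die-per-wafer economics: the same amount of logic on a denser node
# takes less area, so more candidate dies fit per wafer and yield improves.
# Wafer size, die areas, and defect density are illustrative assumptions.
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Common approximation that accounts for edge loss.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2, defects_per_mm2):
    # Simple Poisson yield model.
    return math.exp(-die_area_mm2 * defects_per_mm2)

for label, area in [("old node, 220 mm^2 die", 220), ("shrunk, 150 mm^2 die", 150)]:
    dpw = dies_per_wafer(300, area)
    good = dpw * yield_fraction(area, 0.001)
    print(f"{label}: {dpw} candidates, ~{good:.0f} good dies")
```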
> The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.

Nope. It can, but there's no guarantee. Architecture and node both play a part in clock speeds, but a node being smaller doesn't guarantee you're actually getting higher clocks: see Intel 7 vs... everything else. ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.
> To my understanding, node improvement directly affects the ability to create a better architecture,

It's an enabler, for sure. You can do a bigger core on an older node, but if you go too complex for the density supported by the manufacturing node, you have to add more latency-inducing pipeline stages or reduce peak clock speeds. The latter is basically the Apple playbook. Since their cores are mobile-oriented, they're not designed to hit high clock speeds. Instead, they focus on more complex designs that more than make up in IPC what they lack in high frequencies.
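A crude way to see that trade-off (a toy model with invented gate counts and delays, not numbers for any real core): peak frequency is roughly limited by how much logic you ask each pipeline stage to finish in one cycle.

```python
# Toy model of the pipeline-depth vs. clock-speed trade-off.
# Gate delays and logic depths are invented for illustration only.

def max_clock_ghz(gate_delay_ps, gates_per_stage, overhead_ps=20):
    # A stage must finish its logic plus latch/clocking overhead in one cycle.
    cycle_ps = gate_delay_ps * gates_per_stage + overhead_ps
    return 1000.0 / cycle_ps

# Same total work (e.g. 240 gate delays of logic) split into more or fewer stages:
for stages in (10, 15, 20):
    gates_per_stage = 240 / stages
    print(f"{stages:2d} stages -> {max_clock_ghz(10, gates_per_stage):.2f} GHz")
# Fewer, fatter stages mean lower clocks; more stages raise clocks but add
# latency (e.g. bigger branch-miss penalties).
```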
> The node shrink also directly leads to increased clock speeds because smaller transistors that are closer together can switch faster.

Yeah, and they also require less power to do so, which adds more headroom for cranking up frequencies. This is why you tend to see the formulation "X performance at the same power; Y power at the same performance" when someone like TSMC introduces a new node. In that analysis, the constant is the microarchitecture, which is usually something like an old ARM core design that they fab on both nodes for comparison purposes.
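Both of those marketing numbers fall out of the same shifted voltage/frequency curve. A small sketch with an invented curve (real foundry claims come from measured silicon, not numbers like these):

```python
# How "X% faster at the same power / Y% less power at the same performance"
# both come from one improved voltage/frequency curve. Curve points are
# hypothetical, for illustration only.

old_curve = {4.0: 60, 4.5: 75, 5.0: 95, 5.5: 120}   # GHz -> watts (made up)
new_curve = {4.0: 45, 4.5: 55, 5.0: 70, 5.5: 90}    # same core, better node

same_perf = 5.0   # GHz
print(f"at {same_perf} GHz: {1 - new_curve[same_perf] / old_curve[same_perf]:.0%} less power")

same_power = 95   # watts the old node needed for 5.0 GHz
best_new = max(f for f, w in new_curve.items() if w <= same_power)
print(f"at ~{same_power} W: {best_new / same_perf - 1:.0%} higher clock")
```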
> These smaller transistors also allow for more advanced power delivery mechanisms,

Power delivery isn't directly related. What Intel's "Backside Power" feature of their 18A node does is improve logic density by eliminating some power-distribution wires from the logic side of the die. This factors into their claims about overall density improvements for the node. So, if you had two nodes that were exactly the same other than backside power delivery, Intel would say the latter is higher-density, because they're counting overall logic density.
> More advanced packaging techniques allow for similar features in a smaller area,

No, that's just about enabling designs to have multiple chiplets and layers. Chiplets are more about achieving better economies of scale than about directly benefiting performance. So far, the only case where you can directly attain better performance via advanced packaging is through die-stacking, such as RAM-on-logic.
> Every once in a while, a different kind of transistor, either in material, format, or both, comes out that also drastically changes some of the above dynamics.

New materials and transistor designs are an essential part of continuing to shrink down to smaller scales.
> Node generally has nothing to do with architecture unless there are specific design features being put into play (BSPDN, for example). The biggest impact node has on design is typically from a financial aspect in terms of die per wafer (a smaller node means you can design a larger chip with less penalty).

The node you are designing the chips on is outcome-determinative of the architecture.
> Nope. It can, but there's no guarantee. Architecture and node both play a part in clock speeds, but a node being smaller doesn't guarantee you're actually getting higher clocks: see Intel 7 vs... everything else. ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.
Concentration of heat is also becoming a very big issue with smaller client chips, which may have something to do with stagnating clock speeds.
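A quick illustration of that heat-concentration point; die areas and power figures are hypothetical, just to show the W/mm^2 trend:

```python
# Power density: roughly the same package power pushed through a shrinking
# hot-spot area. Die areas and power numbers below are illustrative only.

chips = [
    ("older client die", 200, 125),   # (label, core-area mm^2, sustained watts)
    ("shrunk client die", 120, 125),
]
for label, area_mm2, watts in chips:
    print(f"{label}: {watts / area_mm2:.2f} W/mm^2")
# Same watts into ~60% of the area -> ~1.7x the power density to pull out
# through the same heatspreader, which can cap sustained clocks.
```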
> ADL to RPL is also a great example of how architecture can affect clock speeds more than a node change.

You've got that backwards. The microarchitecture is identical. The node changes enabled voltage/frequency curve improvements, which then enabled higher peak frequencies and better frequency scaling of multi-threaded workloads.
> The node you are designing the chips on is outcome-determinative of the architecture.

Yes, mostly. One of the more interesting and recent case studies is Rocket Lake, where Intel backported the cores of Ice Lake from their 10 nm node to their 14 nm node. The result was that it ran too hot. Also, the size of the cores meant that Intel had to walk back from 10 cores per die to just 8. Combined with poor frequency scaling, that meant the Rocket Lake i9 actually performed worse on some multi-threaded workloads than a Comet Lake i9.
> You've got that backwards. The microarchitecture is identical. The node changes enabled voltage/frequency curve improvements, which then enabled higher peak frequencies and better frequency scaling of multi-threaded workloads.

I've seen zero evidence indicating they changed the node between ADL and RPL. If you've got some, please share.
Besides that, they added E-cores, enlarged L2 caches, and tweaked the ring bus. But, if we're just looking at the core, then it's really a story of how they were able to improve on Intel 7.
> I've seen zero evidence indicating they changed the node between ADL and RPL. If you've got some, please share.

How does lower-latency memory access increase clock speeds? All this does is optimize the usefulness of the clocks that already exist. This could increase performance, but not clock speed.
The cache redesign, which is absolutely part of the architecture, is actually what allowed for the higher peak frequencies.
> How does lower-latency memory access increase clock speeds?

It uses less power; it is largely the same performance-wise.
> It uses less power; it is largely the same performance-wise.

That's why I said it "could" increase performance. The question was: how does better access to memory increase clock speeds?