News Intel Core i9-12900K and Core i5-12600K Review: Retaking the Gaming Crown


kinney

Distinguished
Sep 24, 2001
2,262
17
19,785
That article doesn't say anything about MCE though.

MTP may be PL2, I'll grant you that. MCE essentially offers a potentially infinite turbo; it's not overclocking, which was the original point I was replying to.

Alder Lake and Rocket Lake DO stretch their legs quite well with it enabled (and do it under warranty). I can see why people who don't want Intel to look good, or don't want people to focus on performance, are disgruntled. To me that's just stupid fanboyism, as I've owned a 5950X, 5900X and 11900K, and soon a 12900K. I just want the best gaming performance. Performance is kind of the whole and only point here. If I want the best efficiency, I'll buy an iPad.
 
  • Like
Reactions: Why_Me

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
If you go with DDR4 for the Intel build, the platform cost for a 12600KF (plus a decent air cooler) is about the same or lower than a 5800X + X570, based on current prices.
But you can also go B550 and it's cheaper. Intel does not have that option.

Not to mention you can even go B450 and then it's a laugh.

I would never take a 12600KF over a full 8c/16t Zen 3 CPU when the entire platform is cheaper on Zen 3 and the two are tied, or within margin of error of each other.

P.S. Actually, I would not buy anything right now; in three months tops, Zen 3D will be here, and then you'll know the entire picture for the next 6-8 months and can choose the best option. It will be Zen 3D.
 
Last edited:

InvalidError

Titan
Moderator
But you can also go B550 and it's cheaper. Intel does not have that option.
Took AMD nearly a year to launch B550 after X570. B660 and the other chipsets will launch in a few months if you want something cheaper.

While B450 may be an option, B450 boards are generally of much lower quality than B550 ones, since motherboard manufacturers weren't yet comfortable throwing their full support behind AMD at the time. You also need to keep in mind that B450 means no PCIe 4.0, which is becoming an issue now that AMD is crippling its lower-end cards with x8 interfaces.
 
  • Like
Reactions: Why_Me
Then again, for anyone owning one of those low-end cards there's no point in getting any of the new Alder Lake 12600K, 12700K or 12900K, nor the older Ryzen 5600X, 5800X or 5900X. Best to save the money for a better GPU lol
 
Last edited:
  • Like
Reactions: VforV

InvalidError

Titan
Moderator
Then again, if you own one of those low-end cards there's no point in getting any of the new Alder Lake 12600K, 12700K or 12900K, nor the older Ryzen 5600X, 5800X or 5900X. Better save the money to get a better GPU lol
Some people stagger their upgrades, especially in this age of stupidly expensive GPUs where it makes more sense to buy something lower-end (or just reuse what you already have) and wait it out if you aren't one of the many people obsessed with running everything at 16K/300fps Ultra. I have a GTX 1050 and plan to stretch it either until it breaks or until I can get a sizable upgrade new for under $200.
 
Some people stagger their upgrades, especially in this age of stupidly expensive GPUs where it makes more sense to buy something lower-end (or just reuse what you already have) and wait it out if you aren't one of the many people obsessed with running everything at 16K/300fps Ultra. I have a GTX 1050 and plan to stretch it either until it breaks or until I can get a sizable upgrade new for under $200.

Yeah, sorry, my bad. I didn't mean to write "you", but more like anyone owning one of those cards. I will fix it.
 
In any case, if I were able to build a new system (either for gaming or work), I would avoid the 12600K and get the 12700K instead. Or just wait and get a Core i5-12400 for even less money than all the CPUs named so far.
Based on the one review I saw comparing the two, the 12600K seems like it might be the better value. Since both the 12600K and the 12700K get the same number of E-cores, multithreaded performance can end up a bit closer than usual between the 6- and 8-core parts, as the E-cores boost multithreaded performance to a similar degree for both. And of course, there isn't all that much difference in light-to-moderately-threaded performance between the two, as they operate at fairly similar clocks. Unless someone specifically requires as much heavily multithreaded performance as they can get for more niche tasks like CPU-based video encoding and rendering, they are unlikely to notice any significant difference in performance between the two.

Compared to the prior-gen 8-cores, the 12600K ends up beating the previous i7s by a decent margin at most lightly and heavily threaded tasks, and roughly matches or in some cases exceeds the performance of the 5800X as well. Had AMD priced the 5000-series more like the previous generation of Ryzen processors, the 12600K wouldn't have looked all that impressive at launch, but since those CPUs were offering industry-leading performance amid shortages, lower prices would only have made the shortages last longer while making AMD less money.

As for the 12400, it will probably be a decent value, but lacking E-Cores, it won't see the same kind of improvement to multithreaded performance. It's also possible that by the time that processor launches early next year, the 5600X (or possibly a relaunched non-X part with similar performance) might be available for around $200.

5800X is also gimped to DDR4. At least 12600KF provides DDR5 option if you need it.
DDR5 is terrible value right now, and is poorly suited to the 12600K. You could pay less to move up to a 12700K with 32GB of DDR4-3600 and get better all-around performance than you would from pairing a 12600K with entry-level DDR5. And that DDR5 can itself "gimp performance" in things like games, which can run worse than on DDR4 due to DDR5's much higher latency.
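To put rough numbers on the latency point, here's a back-of-the-envelope sketch using typical retail kit specs (illustrative numbers, not the kits from this review):

```python
def first_word_latency_ns(cas_latency, data_rate_mt_s):
    """Approximate first-word latency: CAS cycles divided by the memory
    clock, which is half the effective data rate (DDR = double data rate)."""
    memory_clock_mhz = data_rate_mt_s / 2
    return cas_latency / memory_clock_mhz * 1000  # cycles/MHz -> ns

# Typical kits (illustrative, not the review's test hardware):
print(f"DDR4-3600 CL16: {first_word_latency_ns(16, 3600):.1f} ns")  # ~8.9 ns
print(f"DDR5-4800 CL40: {first_word_latency_ns(40, 4800):.1f} ns")  # ~16.7 ns
```

Entry-level DDR5's bandwidth advantage doesn't help latency-sensitive workloads like games, which is where that roughly doubled absolute latency shows up.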

As for PCIe 5.0, even 4.0 doesn't really provide any tangible benefit over 3.0 in home systems yet, so doubling the maximum available bandwidth again is not likely to be of much use for the foreseeable life of the system. And it's only available on the first x16 slot, while even today's highest-end graphics cards don't show any perceptible difference in performance even from 4.0 (with an x16 card), and probably won't for years to come.
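For context on the bandwidth side, the per-direction numbers fall out of simple arithmetic from the published PCIe lane rates (the helper below is my own illustration):

```python
# Per-direction PCIe bandwidth. Gens 3/4/5 all use 128b/130b encoding,
# at 8, 16 and 32 GT/s per lane respectively.
TRANSFER_RATE_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32}

def pcie_bandwidth_gb_s(gen, lanes):
    """Usable GB/s per direction: raw rate x lanes x encoding efficiency / 8 bits."""
    return TRANSFER_RATE_GT_S[gen] * lanes * (128 / 130) / 8

print(f"PCIe 3.0 x16: {pcie_bandwidth_gb_s('3.0', 16):.2f} GB/s")  # ~15.75
print(f"PCIe 4.0 x8:  {pcie_bandwidth_gb_s('4.0', 8):.2f} GB/s")   # same ~15.75
print(f"PCIe 5.0 x16: {pcie_bandwidth_gb_s('5.0', 16):.2f} GB/s")  # ~63.0
```

Which is also why an x8 Gen 4 card is only a concern in a Gen 3 slot: there it runs at 3.0 x8, half the 3.0 x16 figure above.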

I would agree that the 5800X is not currently worth the price it's been selling at up to this point at close to $400, but it's likely to see a price drop bringing it down around the 12600K's price. In fact, MicroCenter already briefly put the 5800X on sale for $300 in-store (currently $330), and that kind of pricing could become common online in the coming weeks. The 5800X tends to be fairly competitive with the 12600K in most workloads, so at similar pricing it could be a viable competitor.

Sure, but if AMD has to reduce prices now, then they are going to make less money for a whole year, and they aren't making that much to begin with.

Also, next year the shortages will all continue, meaning that making a CPU will become more expensive, more difficult and more time-consuming, because they will have to source all the components first.

And if they come out with these features in a year, they will be a year late to the party; everybody who needs the features will already have them.
AMD's been making huge profits off the 5000-series processors given their relatively high price points, and the fact that they shouldn't really cost more to manufacture than the 3000-series parts. The 3000-series itself was profitable at much lower prices, and at prices similar to what those were selling for, these processors can once again become competitive with Alder Lake.

As for the shortages, the 5000-series has been readily available around its MSRP (or on sale for less) since earlier this year. Intel's competitive pricing and improvements to their 11th-gen processors alleviated a lot of the excess demand. And now with Alder Lake retaking the performance lead, I suspect AMD should be able to keep up with the supply of 5000-series parts, even at lower price points.

As for a new gen, that's what's planned for their 3D V-Cache CPUs that should be coming early in the year. Whether those will manage to provide superior performance to Alder Lake remains to be seen, but AMD has hinted that they might, though that will likely depend on the workload.

I don't think anyone suggested THG should custom-tune subtimings and such for AMD.

However, using 2x 16GB dual-rank DIMMs instead of 2x 8GB single-rank DIMMs on all systems, including Intel, would not have cost any additional testing time.

I think it was a poor choice to use the 2x 8GB single-rank DIMMs when I'm sure several 2x 16GB dual-rank kits would not be too hard to find in the THG testing lab.
Yeah, that seems a bit questionable. You can't directly compare how much the DDR5 is making a difference compared to DDR4 when the DDR5 gets the benefit of a dual rank memory setup, which has been shown to often make more of a difference to performance than tighter timings and higher clocks. So it's not really an apples-to-apples comparison here, even for comparing how much of a potential difference DDR5 makes for Alder Lake, and it's possible that some of the DDR5 wins might be reversed if the DDR4 were configured similarly. Tom's should really do an updated memory scaling article, including DDR4 and DDR5 in both dual and single-rank configurations, and it would probably be good to include Rocket Lake and Zen 3 in the results as well.

I say to that, don't give people the benefit of the doubt. Some will/would do it. It's not going to be everyone, of course.
Sure, some might want to upgrade to faster RAM, but those people probably won't care so much about the lack of value in doing so. The price difference for a DDR5 setup is currently about as much as a motherboard itself. If they are willing to spend hundreds of dollars to replace their existing RAM with other RAM that gets them maybe a few percent more performance in typical CPU-limited workloads, then they probably won't be too bothered about having to replace their motherboard either. It seems unlikely that DDR5 will provide a major advantage for these processors down the line, and it provides mixed results currently, so it doesn't really seem like something anyone should feel they are missing out on with this generation of processors. I believe the mainstream B-series boards will only come in DDR4 variants, and the same goes for many Z-series boards, so it's more of an enthusiast feature for very high-end builds, for those willing to pay a lot more for something that can improve performance slightly at some tasks, while offering similar or in some cases slightly worse performance in others.
 
  • Like
Reactions: The_King
Based on the one review I saw comparing the two, the 12600K seems like it might be the better value. Since both the 12600K and the 12700K get the same number of E-cores, multithreaded performance can end up a bit closer than usual between the 6- and 8-core parts, as the E-cores boost multithreaded performance to a similar degree for both. And of course, there isn't all that much difference in light-to-moderately-threaded performance between the two, as they operate at fairly similar clocks. Unless someone specifically requires as much heavily multithreaded performance as they can get for more niche tasks like CPU-based video encoding and rendering, they are unlikely to notice any significant difference in performance between the two.

Compared to the prior-gen 8-cores, the 12600K ends up beating the previous i7s by a decent margin at most lightly and heavily threaded tasks, and roughly matches or in some cases exceeds the performance of the 5800X as well. Had AMD priced the 5000-series more like the previous generation of Ryzen processors, the 12600K wouldn't have looked all that impressive at launch, but since those CPUs were offering industry-leading performance amid shortages, lower prices would only have made the shortages last longer while making AMD less money.

As for the 12400, it will probably be a decent value, but lacking E-Cores, it won't see the same kind of improvement to multithreaded performance. It's also possible that by the time that processor launches early next year, the 5600X (or possibly a relaunched non-X part with similar performance) might be available for around $200.
........

Better value, no doubt about it, but as I wrote, if I were able to build a new system I would still pick the 12700 because I've never had a Core i7 and I want to own at least one in my life :) .

Now, if I were able to build a new system and didn't have enough money for the i7, then of course the i5-12600K or 12400 would probably be my choice (as long as the 12400 doesn't suck at multithreaded work, because I still do some heavy multithreaded work every now and then). But yeah, without the E-cores, with less L3 cache and a lower boost frequency, I think the 12400 won't be as exciting as the 12600K.
 

InvalidError

Titan
Moderator
Yeah, sorry, my bad. I didn't mean to write "you", but more like anyone owning one of those cards. I will fix it.
I didn't take it personally, just cited myself as an example of a person who carries stuff over, though 4.0 vs 3.0 is irrelevant in my case since my existing GPU only does 3.0 x16. The 3.0 vs 4.0 dilemma only affects people carrying over an RX 5500/6500/6600 - the only modern desktop GPUs that have a 4.0 x8 interface instead of an x16 one.
 

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
Took AMD nearly a year to launch B550 after X570. B660 and the other chipsets will launch in a few months if you want something cheaper.

While B450 may be an option, B450 boards are generally of much lower quality than B550 ones, since motherboard manufacturers weren't yet comfortable throwing their full support behind AMD at the time. You also need to keep in mind that B450 means no PCIe 4.0, which is becoming an issue now that AMD is crippling its lower-end cards with x8 interfaces.
Yeah, but in a few months Zen 3D will launch too, and then you will have an even better reason to still buy AMD.

Also, while that's true of many B450 boards, the ones from MSI, the B450 Tomahawk and Mortar, and especially the MAX series that I have, are some of the best motherboards ever. Amazing price/performance/quality.

I can even run a 5950X overclocked on my board; that's how good the VRMs are.

PCIe 4.0 might be a problem for some, but not for me. Games don't even fully use PCIe 3.0 NVMe SSDs yet, let alone the full speed of a PCIe 4.0 one. We are still waiting for Microsoft's support and RTX IO and all those techs advertised back in 2020.

So far the only ones that take advantage, I mean full advantage, are the games made for the PS5. Not even the XSX fully utilizes the asset-streaming throughput of its SSD, just loading. So yeah, there's a lot of catching up to be done by PCs (and the XSX too, but that's beside the point).

As for the PCIe 4.0 issue with GPUs, I think that's an issue, but a minor one that does not affect that many people.
 
  • Like
Reactions: The_King

The_King

Distinguished
Dec 10, 2007
73
22
18,635
Also, while that's true of many B450 boards, the ones from MSI, the B450 Tomahawk and Mortar, and especially the MAX series that I have, are some of the best motherboards ever. Amazing price/performance/quality.
I can even run a 5950X overclocked on my board; that's how good the VRMs are.

Running the same board, a B450M Mortar Max, in one of my rigs. It is very special indeed. The VRMs are fantastic.

Yeah, that seems a bit questionable. You can't directly compare how much the DDR5 is making a difference compared to DDR4 when the DDR5 gets the benefit of a dual rank memory setup, which has been shown to often make more of a difference to performance than tighter timings and higher clocks. So it's not really an apples-to-apples comparison here, even for comparing how much of a potential difference DDR5 makes for Alder Lake, and it's possible that some of the DDR5 wins might be reversed if the DDR4 were configured similarly. Tom's should really do an updated memory scaling article, including DDR4 and DDR5 in both dual and single-rank configurations, and it would probably be good to include Rocket Lake and Zen 3 in the results as well.

Test setup
AMD Socket AM4 (X570): AMD Ryzen 9 5950X, Ryzen 9 5900X, Ryzen 7 5800X, Ryzen 5 5600X
MSI MEG X570 Godlike
2x 8GB Trident Z Royal DDR4-3600 - Stock: DDR4-3200 14-14-14-36

Apparently, if you have the money to buy a 5950X and an MSI MEG X570 Godlike,
you have no money left over and have to buy the cheapest 2x 8GB RAM kit to go with the setup.

I'm sure all 5950X, 5900X and 5800X owners are running 2x 8GB DIMMs in their systems. ;)
 
  • Like
Reactions: VforV
MTP may be PL2, I'll grant you that. MCE essentially offers a potentially infinite turbo; it's not overclocking, which was the original point I was replying to.
Intel renamed PL2 to Maximum Turbo Power because it's exactly that: the potentially infinite turbo power.
MCE clocks all cores to the maximum single-core clock, which was OK when we only had a few cores, although I don't know whether it was considered overclocking even back then.
Now MCE is not standard or sanctioned; turbo clocks per number of active cores are controlled by the CPU itself, which is why Intel doesn't publish turbo tables anymore. Bypassing the CPU's decisions by enabling MCE is overclocking.
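For what it's worth, the way PL1, PL2 and Tau interact can be sketched with a toy model (my own simplified simulation of the exponentially weighted moving average power budget Intel describes; the wattages are typical published 12900K figures, not vendor firmware behavior):

```python
import math

def allowed_power(avg_power_w, pl1_w, pl2_w):
    """The CPU may draw up to PL2 while the moving average of package
    power stays below PL1; once it crosses, draw is clamped to PL1."""
    return pl2_w if avg_power_w < pl1_w else pl1_w

def step_ewma(avg_power_w, instant_power_w, dt_s, tau_s):
    """Advance the exponentially weighted moving average by dt seconds."""
    alpha = 1 - math.exp(-dt_s / tau_s)
    return avg_power_w + alpha * (instant_power_w - avg_power_w)

# Illustrative limits: PL1=125 W, PL2=241 W, Tau=56 s.
avg = 0.0
for second in range(300):
    draw = allowed_power(avg, 125, 241)
    avg = step_ewma(avg, draw, 1.0, 56)
# After a few minutes the average settles at PL1, so sustained draw falls
# back to ~125 W. Push Tau toward infinity (effectively what MCE or
# "MTP = PL2 forever" does) and the clamp never kicks in.
```

The point of contention in the thread maps onto the knobs here: raising Tau alone removes the duration limit, while MCE also touches the per-core clock behavior.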
 

kinney

Distinguished
Sep 24, 2001
2,262
17
19,785
Intel renamed PL2 to Maximum Turbo Power because it's exactly that: the potentially infinite turbo power.
MCE clocks all cores to the maximum single-core clock, which was OK when we only had a few cores, although I don't know whether it was considered overclocking even back then.
Now MCE is not standard or sanctioned; turbo clocks per number of active cores are controlled by the CPU itself, which is why Intel doesn't publish turbo tables anymore. Bypassing the CPU's decisions by enabling MCE is overclocking.

Not quite. MCE removes boost duration limits if your hardware allows; TB/TB2/TB3/TVB/ABT set the clock limits. See Intel's New Adaptive Boost Technology: Floating Turbo Comes to Rocket Lake (anandtech.com)
 

InvalidError

Titan
Moderator
Funny.
Same price as Zen3 now, but Zen3 will drop in price. Perf will be +15% on average. You're in for a surprise (and all the intel fanbois).
I wouldn't be so optimistic on pricing: Zen 2 was going at fire-sale prices in the months prior to Zen 3's launch; then supply of Zen 2 chips dried up as AMD shifted most production to Zen 3/PS5/XSX, and prices bounced back almost to parity with Zen 3. Since Zen 3D is based on the same process as Zen 3, AMD will likely reduce or stop production of the old SKUs that get a 3D refresh ahead of launch, to eliminate overlap and raise prices.
 
  • Like
Reactions: derigueur

kinney

Distinguished
Sep 24, 2001
2,262
17
19,785
Funny.
Same price as Zen3 now, but Zen3 will drop in price. Perf will be +15% on average. You're in for a surprise (and all the intel fanbois).

Why do we have to be "Intel fanboys"? Why not just performance fanboys? I'm not a budget fanboy. I've built and owned a 5900X rig, and had a 5950X as well. I also have an 11900K, and soon a 12900K. Intel is the AAA gaming champ, and has been. I mainly focus on Flight Simulator 2020 and Cyberpunk 2077 because they're good representatives of current near-term performance, and that's what I build these for. All with a 3090, and all with Intel/AMD warrantied settings:
Flight Simulator 2020 4K minimum frames
Cyberpunk 2077 4K minimum frames
Cyberpunk 2077 1080P minimum frames

That all-important minimum framerate says it all. My Ryzen rigs were OK, although buggy compared to my Intel rigs, but Ryzen is what it is. It's a great value if you want a lot of cores. If you want max gaming performance, you buy Intel. Both work fine, though. There's nothing else to this story.

If I'm making recommendations to people without any personal criteria, I'm telling everyone to buy a 12600K. Fantastic chip.
 
  • Like
Reactions: Why_Me
Not quite. MCE removes boost duration limits if your hardware allows,
Just setting Tau (the long-duration power time window) will do that.
(Here, 16 seconds if not set to Auto.)
Just because MCE does this as well doesn't mean it's the only thing MCE does.
Tau should always be available as a separate setting in your BIOS, even if you prefer MCE.

Even the article you link to says that:
"There will be some users who are already familiar with Multi-Core Enhancement / Multi-Core Turbo. This is a feature from some motherboard vendors have, and often enable at default, which lets a processor reach an all-core turbo equal to the single core turbo. That is somewhat similar to ABT, but that was more of a fixed frequency, whereas ABT is a floating turbo design. That being said, some motherboard vendors might still have Multi-Core Enhancement as part of their design anyway, bypassing ABT. "
 

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
Why do we have to be "Intel fanboys"? Why not just performance fanboys? I'm not a budget fanboy. I've built and owned a 5900X rig, and had a 5950X as well. I also have an 11900K, and soon a 12900K. Intel is the AAA gaming champ, and has been. I mainly focus on Flight Simulator 2020 and Cyberpunk 2077 because they're good representatives of current near-term performance, and that's what I build these for. All with a 3090, and all with Intel/AMD warrantied settings:
Flight Simulator 2020 4K minimum frames
Cyberpunk 2077 4K minimum frames
Cyberpunk 2077 1080P minimum frames

That all-important minimum framerate says it all. My Ryzen rigs were OK, although buggy compared to my Intel rigs, but Ryzen is what it is. It's a great value if you want a lot of cores. If you want max gaming performance, you buy Intel. Both work fine, though. There's nothing else to this story.

If I'm making recommendations to people without any personal criteria, I'm telling everyone to buy a 12600K. Fantastic chip.
Well, you're an exception, and exceptions are the minority.

Also, those graphs, especially on 1% low on CP 77 are complete BS!

This is what you posted above:
Cyberpunk 2077 1080P minimum frames


I trust only HUB and GN:

 

kinney

Distinguished
Sep 24, 2001
2,262
17
19,785
Well, you're an exception, and exceptions are the minority.

Also, those graphs, especially on 1% low on CP 77 are complete BS!

This is what you posted above:
Cyberpunk 2077 1080P minimum frames


I trust only HUB and GN:

Well, you'd be better served by keeping an open mind, because GN disables MCE, which is not only a default-enabled option but a warrantied one. No one who cares about the truth, or is honest, would prefer benchmarks where the competition is hobbled like that. Let AMD and Intel perform their best. If you don't like that idea, well, you're just a fanboy. And in that case, this Alder Lake domination must hurt. And if it does hurt a fanboy's heart, that's definitely going to help me sleep at night. 😊

I don't have to "trust" any single source of information, I know what I'm looking at. I am the exception. I'm fully informed on why there's a disparity from your GN benches. My links are from Digital Foundry. They're not BS, they're correct. All of this has been out there since Rocket Lake's launch, you just prefer results that are what you want to see. 🙂
It sucks to be in distress, rooting for the team that lost and playing spin doctor. I've been there, before I had money. Now I just buy the best of the best.

Remember, I had a 5900X rig and I benched and used it myself side by side with my 11900K rig. I got rid of the 5900X. I think anyone with the gear to test would agree with me. No one is going to buy an i9 and disable MCE or ABT. Your sources are an incomplete picture.

Just setting Tau (the long-duration power time window) will do that.
(Here, 16 seconds if not set to Auto.)
Just because MCE does this as well doesn't mean it's the only thing MCE does.
Tau should always be available as a separate setting in your BIOS, even if you prefer MCE.

Even the article you link to says that:
"There will be some users who are already familiar with Multi-Core Enhancement / Multi-Core Turbo. This is a feature from some motherboard vendors have, and often enable at default, which lets a processor reach an all-core turbo equal to the single core turbo. That is somewhat similar to ABT, but that was more of a fixed frequency, whereas ABT is a floating turbo design. That being said, some motherboard vendors might still have Multi-Core Enhancement as part of their design anyway, bypassing ABT. "

Actually, I'm going to have to test this myself. I'm typing this on an 11900K with ABT and MCE enabled, and would be tickled if I could get 5.3GHz all-core turbo in games. Right now I'm getting 5.1GHz. An unwavering, never faltering 5.1GHz on 8 cores. I'll definitely report back on this, because I'm going to love being proven wrong on this one. 🥳

Either way, MCE is definitely fair game and should be left enabled in reviews, as it's the default, and its usage is covered under warranty.
It definitely causes Rocket Lake, and Alder Lake to blast off.. and that is inconvenient to haters everywhere.
 
Last edited:

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
My links are from Digital Foundry.
1. I'm a fanboy of price/perf and integrity (as much as that can exist in the corporate world).
Both Intel and Nvidia have done, and still do, much worse things and more **** than AMD ever did. For that reason I cannot cross my principles and like them. Some people have no principles; that's OK, that's how the world is today.
If AMD starts doing **** as bad as or worse than Intel and Nvidia, I won't support them either. I'm not a blind fanboy; I have a conscious preference for the lesser evil.

2. I'm not hurt at all. It's a +7% average increase in performance for Alder Lake (12900K) over Zen 3, after a year, on a more expensive platform that's more power-hungry and hotter, and that needs the user to be a beta tester for Win11, DDR5 and the big.LITTLE experiment, with all the drawbacks and issues.
So no thanks. It's laughable from Intel, but at least it's not the failure that Rocket Lake was, and since competition is back, the only positive is that it will push AMD harder and its prices lower (that's the only good thing to come of this).

3. Zen 3D will come in 2-3 months and that +7% average is as good as gone; it will be a short-lived semi-victory for Intel. Again, I'm not hurt at all, because I'm really not impressed by Alder Lake at all.
If it were a clear win on all fronts: every test won by over 15% (I mean every one, and all games), better or at least equal power consumption, cool thermals, then YES, we would have had a real winner and I would have praised it at least from that POV, even if I don't like the company.

4. The nail in the coffin for you is your source, and by extension the kind of info you trust and how you think. That's all I needed to see. You're a non-factor from now on. DF is the definition of BS in their field (tech benchmarks of games); NX Gamer schools them every time he makes a video, both in professionalism and in not shilling like they do, let alone in testing CPUs and GPUs, where they are known shills and you can see that clearly in the graphs. They don't come within a mile of HUB and GN. Your source is beyond pathetic on that front.

I don't even need to give more explanations; it would be like talking to the wind with an individual like you. I'm not wasting my time anymore, and this is why you are now blocked and ignored.
 
Last edited:

InvalidError

Titan
Moderator
2. I'm not hurt at all. It's a +7% average increase in performance for Alder Lake (12900K) over Zen 3, after a year, on a more expensive platform that's more power-hungry and hotter, and that needs the user to be a beta tester for Win11, DDR5 and the big.LITTLE experiment, with all the drawbacks and issues.
Like it or not, even AMD is planning to go big.LITTLE within the next two years. As with anything else, someone has to go first and figure it out eventually.

I didn't want to take chances with the amount of new stuff in Alder Lake and ongoing component shortages that seemed poised to get worse before getting better, so I got an i5-11400 back in May.
 
  • Like
Reactions: RodroX

VforV

Respectable
BANNED
Oct 9, 2019
578
287
2,270
Like it or not, even AMD is planning to go big.LITTLE within the next two years. As with anything else, someone has to go first and figure it out eventually.

I didn't want to take chances with the amount of new stuff in Alder Lake and ongoing component shortages that seemed poised to get worse before getting better, so I got an i5-11400 back in May.
My issue is not with big.LITTLE itself (although AMD's version will be a little different); it's with being a beta tester of... anything. I never do that, or like it. I let others be the beta testers and join in only after all or most of the issues are gone, and by then the prices are less ridiculous too.

I did not buy Zen 1 either, for the same reason; my first AMD CPU was Zen+, which at the time was not only better but had almost none of the issues that were present on Zen 1. Not to mention that by buying a Zen+ motherboard (B450) I have an upgrade path all the way up to Zen 3D, and that's the best win and feature from AMD for me.

That's why I never pre-order and never buy on day one; I have a thing called patience. Most people today lack it, and it shows in what they say and what they do.

Even though I don't like Intel, I would say from a tech POV that Raptor Lake would make much more sense to me than Alder Lake if you want Intel. Not just because it's a new gen, but because a lot of the beta-tester issues I mentioned above, if not all of them, will be gone by then.

The reality is that Zen 3 is on a stable, proven platform now, walking tall, while Alder Lake is barely starting to walk and is still shaking and farting along, even if with half a foot in the lead... for now.
 
  • Like
Reactions: King_V
