News Intel quietly rolls out 'new' Core 5 CPUs that look suspiciously like 12th Gen chips — Core 5 120 and Core 5 120F enter the budget gaming market wi...

Yeah, for real. @rluker5 , you and @TerryLaze truly make me not want to buy Intel, again. You guys seem oblivious to the downsides of pressing your case too hard.

Intel had plenty of problems with degrading CPUs and avoided admitting fault at every turn.
Well... they certainly took their time, but they did acknowledge there was something going on and they were looking into it. In the end, they also did the right thing by whipping motherboard makers, issuing microcode fixes, and extending everyone's warranty. So, I actually think Intel didn't handle it too badly. Could've gotten there faster, but they eventually said the right things and took the correct actions.

I hope they can figure it out because a monopoly is bad for all of us.
My long-term concern is about IFS, because a world where everyone else is way behind TSMC is incredibly precarious. I really wanted Gelsinger to succeed in making IFS near enough to self-sustaining that it could be spun out and become a true peer of TSMC and Samsung.

As for CPUs, well... x86 is on its way out. Not tomorrow, but I don't know if it has another decade before it's a niche player. Once you move beyond x86, there's lots of competition. Also, I'd be remiss if I didn't mention that there is a 3rd player in the x86 market: Zhaoxin. They're nowhere near being competitive today, but that picture could change in a decade's time.
 
To my understanding this is a fire sale, 'everything must go!'

But it's also a little strange that these are lower-binned CPUs, when a mature process should turn out nothing but golden samples.

I've kept wondering what Intel is doing with their own production lines, because just keeping them operational costs a ton of money, shutting them down is a permanent drain, too, and they are really designed to run near 100% capacity.

So what do you do when there are no takers?

And while these CPUs most likely are good enough for near everything, Intel's profits still depend on selling those top-end CPUs at margin, so they've been hesitant to push cheap and risk that margin...

...which seems to be exactly what they are doing now, unless this is a trickle rather than opening the flood gates.

Of course, these could be reserved stock for replacements in those mid-range office PCs that sell in the millions and were fabbed years ago. And selling them under a new label is just about getting a few bucks extra from those who trust marketing.
 
Also, I'd be remiss if I didn't mention that there is a 3rd player in the x86 market: Zhaoxin. They're nowhere near being competitive today, but that picture could change in a decade's time.
Ten years out, I won't predict anything but the sun still burning.

IT isn't about technology any more, but politics. And there I find even short term predictions very hard to make.
 
To my understanding this is a fire sale, 'everything must go!'

But it's also a little strange that these are lower-binned CPUs, when a mature process should turn out nothing but golden samples.

I've kept wondering what Intel is doing with their own production lines, because just keeping them operational costs a ton of money, shutting them down is a permanent drain, too, and they are really designed to run near 100% capacity.
Intel doesn't actually have enough capacity to meet demand, because OEMs prefer Intel 7-based processors over everything else: they're cheaper. These chips likely exist solely for that market, so OEMs can sell the same CPUs again under a new name.
 
It's not simply the motherboards' fault. Degradation was continuing to happen, even after they switched to Intel's recommended settings.

What blows me away is your hubris for claiming to understand the problem better than Intel's best engineers.
Why do you think the engineers are in charge of the public relations and legal departments? Do you think they have the only say in this? And do you think they want to get their hands dirty in this screw-up by somebody in charge over there? Do you think one will say: "I knew the power delivery parameters were crap, but didn't speak up because my managers three levels up would have held me accountable for delays"?

I'm just trying to offer the best solution I know to actually help and not continue to obfuscate. I don't work there so I'm not going to get in any trouble over it.
 
There were a few cases of AMD X3D CPUs failing because motherboards set the SoC voltage too high. Intel had plenty of problems with degrading CPUs and avoided admitting fault at every turn.

Whatever, it doesn't matter. No DIY person is choosing Intel these days. I'm glad they are suffering for being unethical in the past, but I hope they can figure it out because a monopoly is bad for all of us.
Am I not allowed to speak of my first-hand experience? I keep hearing about people having trouble with the effects of too-high voltages, and I am telling them how to reduce those voltages.
And I speak from what I've seen in over a decade of overclocking: when you go over 1.5v on a CPU, degradation happens, even on 22nm and 28nm.
 
Microcode 0x12F permanently locks all-core load speeds on the 13900K—from 5.5/4.3 GHz down to 5.0/4.0 GHz—at least on my Asus board. Undervolting, overvolting, and clock adjustments are all disabled, regardless of BIOS settings. It makes paying extra for the K feel like a waste. I've read that rolling back to an older microcode can restore the settings.
It doesn't seem to be the microcode. I updated my office pc to that one and my 13600kf is fine with your old 13900k clocks:
[Screenshot attached: Screenshot-36.jpg]

At least with an MSI board. Maybe you have some new setting in your Asus bios that needs to be toggled to get your old clocks back. I noticed the new bioses default to closer to Intel stock settings but are easy to change back to what seems like their old versions.
 
Am I not allowed to speak of my first-hand experience? I keep hearing about people having trouble with the effects of too-high voltages, and I am telling them how to reduce those voltages.
And I speak from what I've seen in over a decade of overclocking: when you go over 1.5v on a CPU, degradation happens, even on 22nm and 28nm.
Degradation, to my understanding, is impossible to reduce to zero at any voltage, amperage, or temperature. To say that degradation suddenly appears at 1.5v is illogical. What method of testing did you use to even know degradation was occurring past 1.5v? What about 1.475v? Does that work indefinitely? I too have OCed for over a decade, and I believe you are mistaking degradation for instability. A CPU can be any combination of stable/unstable and limited degradation/heavy degradation. Let me know if I am making any incorrect assumptions here.
 
Degradation, to my understanding, is impossible to reduce to zero at any voltage, amperage, or temperature. To say that degradation suddenly appears at 1.5v is illogical. What method of testing did you use to even know degradation was occurring past 1.5v? What about 1.475v? Does that work indefinitely? I too have OCed for over a decade, and I believe you are mistaking degradation for instability. A CPU can be any combination of stable/unstable and limited degradation/heavy degradation. Let me know if I am making any incorrect assumptions here.
Degradation happens due to atomic factors similar to those in diffusion and creep, where atoms "jump" over a spot in the atomic structure. But instead of the main driving force being the reduction of free energy (as in diffusion) or the accommodation of physical stress (as in creep), electrical forces, primarily voltage, drive this behavior in degradation, with the increased wiggling rate of atoms at higher temperatures acting as a facilitator.
With diffusion and creep, since the free-energy configuration and physical stresses are assumed constant, activity is exponential with temperature, generally becoming significant around half the melting point of the material. With degradation, the forcing energy from voltage is highly variable, while temperature effects are relatively constant, since silicon runs nowhere near half of its melting temperature. While it is true that degradation occurs at any voltage, it drops exponentially as you pull further away from the energy needed to make an atom jump out of its place. Atomic motion in solids is directed by forces but is fairly random for individual atoms, and their probabilities of moving follow the familiar bell curve.

And by degradation I meant degradation that would be unacceptable to people over the expected use of a CPU. Where the CPU would become unstable due to stock voltage no longer being sufficient to maintain stock clocks. Here's an old article on it: https://web.archive.org/web/20120923141849/http://www.anandtech.com/show/2468/6

And the 1.5v RPL thing is from Intel's 1.55v limit to run at stock for 5 years, from reports of difficulties running around that voltage, from my anecdotal experience running chips back to Haswell at those voltages, from what the stock voltages have been for my 13900kf on 4 boards from 3 vendors (had to have the best one for the gaming pc), and finally from wanting my chips to do better than stock at stock voltage after 5 years. That, and more evidence, like 13600Ks not having any issues at lower voltages with worse silicon.
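The exponential voltage and temperature behavior described above is often captured with an empirical acceleration-factor model: an exponential voltage term multiplied by an Arrhenius temperature term. Here's a minimal sketch of that idea; the `gamma` and `ea` values are illustrative placeholders, not measured or Intel-published Raptor Lake parameters:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def acceleration_factor(v, temp_c, v_ref=1.3, temp_ref_c=70.0,
                        gamma=8.0, ea=0.5):
    """Modeled degradation rate relative to a reference condition.

    Combines an empirical exponential voltage term, exp(gamma*(V - Vref)),
    with an Arrhenius temperature ratio, exp(-Ea/kT) / exp(-Ea/kT_ref).
    Returns 1.0 at the reference voltage and temperature. gamma (per volt)
    and ea (activation energy, eV) are placeholder values for illustration.
    """
    t = temp_c + 273.15          # convert to kelvin
    t_ref = temp_ref_c + 273.15
    voltage_term = math.exp(gamma * (v - v_ref))
    temp_term = math.exp(-ea / (K_B * t)) / math.exp(-ea / (K_B * t_ref))
    return voltage_term * temp_term


# With these placeholder parameters, going from 1.3 V to 1.5 V at the same
# temperature multiplies the modeled rate by exp(8 * 0.2), roughly 5x,
# while running cooler and under 1.3 V keeps the factor below 1.
```

The point of the sketch is only the shape of the curve: small voltage increases multiply the modeled wear rate, which is consistent with the anecdotal "over 1.5v" threshold reading as a cliff even though degradation never literally drops to zero.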

Personally, the lower the voltage the better, imo. I'm on my office pc right now, which I just redid the bios on to get the latest microcode. Even though the 92mm Thermalright Silver Soul cooler can keep my 13600kf from throttling in anything but CB24 at 5.5ghz and a bit under 1.3v, I run it at 5.2ghz at a bit under 1.2v, because the difference in speed is not worth the difference in fan noise or heat to me. I often drop my gaming pc's 13900kf to 4ghz using a Windows power plan for things where I can't tell the difference; it doesn't go above 1v and stays correspondingly silent. Sometimes I run it at 5.2, because that is where RPL leaves its efficiency range, sometimes 5.5, and rarely 5.7 with no HT. But those last two I don't use often, because I generally only see a benefit when benching, and the added volts and heat are just a waste. Maybe when the CPU is old and getting stressed by applications I will turn it up. I did the same with my other chips.
 
Intel Ark has the Core 5 120 listed as "Products formerly Raptor Lake".
Okay, it seems Intel has now added the Ordering and Compliance tab, which confirms this is H0 stepping silicon.

Furthermore, I don't know if they just added this or I simply missed it, but the Specifications tab you linked lists the L2 cache at 7.5 MB. That confirms this is definitely Alder Lake, since Golden Cove has 1.25 MB of L2 per core (6 × 1.25 MB = 7.5 MB), whereas Raptor Cove has 2 MB of L2 per core (which would give 12 MB). And no, I've never seen any example where they've disabled part of the per-core L2 cache from one model to another.
 
Yeah, I was disappointed Raptor Lake didn't include a refresh of the H0 stepping (6P + 0E), until the degradation problems came to light. Then, I started to be quite happy I managed to dodge that bullet, when I opted for the i5-12600 anyway.


It would be incredibly weird if Intel released LGA1700 CPUs that didn't work in all the LGA1700 boards, including DDR4 and (if you think back really hard) the PCIe 4.0-only boards.

The downside is that I expect the official DDR5 support will still be limited to DDR5-5600.
There are still theories relating the Vmin shift issue to voltages. Some overclockers found that the DVFS algorithm makes no sense, requesting the wrong VID for its workload. And this is inside the CPU, thus hard to fix with firmware updates.

The CPU randomly requests self-destruct-level voltage by default, and if the request is ignored it can sometimes cause a BSOD.

There's also a guess that the improved e-core ring bus is the reason this voltage control goes haywire.

Since Intel is still trying to patch things with the 0x12F firmware, I guess 13th/14th gen should be avoided altogether now.
 
It doesn't seem to be the microcode. I updated my office pc to that one and my 13600kf is fine with your old 13900k clocks:
[Screenshot attached: Screenshot-36.jpg]

At least with an MSI board. Maybe you have some new setting in your Asus bios that needs to be toggled to get your old clocks back. I noticed the new bioses default to closer to Intel stock settings but are easy to change back to what seems like their old versions.
Strange. I spent a few days on this when it first released and never found a solution. With the previous microcode, it was easy: set all Intel defaults in BIOS, disable undervolt protection, and use ThrottleStop in Windows to apply -0.1621V to both P-cache and core. But ThrottleStop doesn't support the new microcode, and I couldn't replicate the undervolt through BIOS. Without an undervolt, the CPU hits its power limits, and temperatures are not far behind.

Edit: figured it out, thanks for making me look at this again. Sorry to derail.
 
There are still theories relating the Vmin shift issue to voltages. Some overclockers found that the DVFS algorithm makes no sense, requesting the wrong VID for its workload. And this is inside the CPU, thus hard to fix with firmware updates.

The CPU randomly requests self-destruct-level voltage by default, and if the request is ignored it can sometimes cause a BSOD.

There's also a guess that the improved e-core ring bus is the reason this voltage control goes haywire.

Since Intel is still trying to patch things with the 0x12F firmware, I guess 13th/14th gen should be avoided altogether now.
Exactly. Even I have been using a 14900k undervolted since release and have luckily survived till now. I won't recommend 13th/14th gen to anyone, whereas 12th gen is fading away from a performance standpoint. In the current lineup I would only go AMD, just due to the instability of the Intel platform.
 
Exactly. Even I have been using a 14900k undervolted since release and have luckily survived till now. I won't recommend 13th/14th gen to anyone, whereas 12th gen is fading away from a performance standpoint. In the current lineup I would only go AMD, just due to the instability of the Intel platform.
I actually find the 265k from Intel to be an unsung hero of theirs. It gets very close to or matches the 9900X in games and productivity while being reasonably efficient. Couple that with z890 boards having better features for the same price or less than the AM5 components, and I think you have a real winner. Sure, you can get more for gaming with a 7800X3D or a 9800X3D, but they cost significantly more currently while being much slower in multithreaded tasks. Mixed productivity builds that also game on the side are prime 265k recommends from me right now.
 
I actually find the 265k from Intel to be an unsung hero of theirs. It gets very close to or matches the 9900X in games and productivity while being reasonably efficient. Couple that with z890 boards having better features for the same price or less than the AM5 components, and I think you have a real winner. Sure, you can get more for gaming with a 7800X3D or a 9800X3D, but they cost significantly more currently while being much slower in multithreaded tasks. Mixed productivity builds that also game on the side are prime 265k recommends from me right now.
But the CUDIMMs are more expensive, offsetting quite a bit of the cost, plus socket 1851 is kind of EOL, whereas AM5 likely has some more refreshes coming, and the option of a 9950X3D when it drops in price after AM6 launches is tempting. For gaming, when anyone is pulling over a grand on a GPU, I don't see the benefit of skipping the X3D chips and their quite significant FPS advantage.

IMO only a purely production setup on a budget is justifiable for Intel right now.
 
Ten years out, I won't predict anything but the sun still burning.

IT isn't about technology any more, but politics. And there I find even short term predictions very hard to make.
Don't be silly. My statement about Zhaoxin was based on the improvement curve I expect them to follow, under the obvious assumptions that they continue to receive the same level of market/government support and eventually get access to manufacturing technology that's roughly on par with what the rest of the market is using. Basically, all of the fair weather assumptions you'd normally expect to be loaded on the phrase "could be".

Acting as if I said they "will be" is just being willfully obtuse. I expect better from you than that.
 
But the CUDIMMs are more expensive, offsetting quite a bit of the cost, plus socket 1851 is kind of EOL, whereas AM5 likely has some more refreshes coming, and the option of a 9950X3D when it drops in price after AM6 launches is tempting. For gaming, when anyone is pulling over a grand on a GPU, I don't see the benefit of skipping the X3D chips and their quite significant FPS advantage.

IMO only a purely production setup on a budget is justifiable for Intel right now.
You do not need to buy CUDIMMs, the LGA1851 motherboards are better, and at 1440p+ the FPS difference between the 265k and a 9800X3D, for 55-60% more cost, is about 9% at 1440p and 7% at 4k with a 5090, according to TPU. The 285k and 265k are roughly equal in gaming performance in the above TPU comparisons. Here is an earlier review with a 4090 at 4k for perspective. More recent reviews comparing these two CPUs at 1440p and 4k are not easily accessible, but I would imagine the 265k has gotten some performance optimizations since.
 
You do not need to buy CUDIMMs, the LGA1851 motherboards are better, and at 1440p+ the FPS difference between the 265k and a 9800X3D, for 55-60% more cost, is about 9% at 1440p and 7% at 4k with a 5090, according to TPU. The 285k and 265k are roughly equal in gaming performance in the above TPU comparisons. Here is an earlier review with a 4090 at 4k for perspective. More recent reviews comparing these two CPUs at 1440p and 4k are not easily accessible, but I would imagine the 265k has gotten some performance optimizations since.
Thing is, when not using the fastest CUDIMMs, it's slower than Raptor Lake yet more expensive (due to Raptor Lake's degradation fame). Neither is in-socket upgradable, meaning an upgrade down the road will cost you another mobo. The price difference between a 9800x3d and a 265k is only $50 USD in my region, hardly anything when aiming at the high end with a 5070 Ti-class or better card. 7% is really quite a lot in gaming, especially in demanding games IMO, not a 250-vs-300-FPS situation. Paying for a 4090 for gaming will be hard to justify just to limit it with a 265k IMO.
 
You do not need to buy CUDIMMs, the LGA1851 motherboards are better, and at 1440p+ the FPS difference between the 265k and a 9800X3D for 55-60% more cost is about 9% at 1440p
But the 9800X3D has 13.6% better minimum FPS @ 1080p. Unfortunately, they didn't test minimums at 1440p, but if you're planning on keeping the system for a few years, we'd expect newer games to be more CPU-bound than current ones, meaning future 1440p performance should come to resemble the current 1080p curve.

I advised someone I know to go with a 9800X3D, because he primarily wanted it for gaming, and he's the type of guy who will keep it for probably a good 5 years or so. I figured he'd therefore want a CPU with some legs. As YSCCC said, if his priorities ever change, he can swap out the CPU for a Zen 6 with up to 24 cores and the 9800X3D should hopefully still have a decent resale value.
 
Thing is, when not using the fastest CUDIMMs, it's slower than Raptor Lake yet more expensive (due to Raptor Lake's degradation fame). Neither is in-socket upgradable, meaning an upgrade down the road will cost you another mobo. The price difference between a 9800x3d and a 265k is only $50 USD in my region, hardly anything when aiming at the high end with a 5070 Ti-class or better card. 7% is really quite a lot in gaming, especially in demanding games IMO, not a 250-vs-300-FPS situation. Paying for a 4090 for gaming will be hard to justify just to limit it with a 265k IMO.
All the numbers I gave you from TPU were without CUDIMM sticks... The test setup used 2x16gb 6000 MT/s CL28 RAM, which is comparable to anything one would use on an AMD system, so the numbers are completely valid. All this complaining about not using CUDIMMs, when I cite numbers that do not use such sticks, is concern trolling, and the difference is 7% at 1440p with such data when the average FPS is above 120...

The price difference between a 265k and a 9800X3D is 150 dollars in the US. Most people only buy a PC once every 4-6 years, by which point an in-socket upgrade makes no sense, so you are looking at a new motherboard anyway. The 5070 ti is not a high-end card either, imo; it's a middle-tier card. 7% is a nearly imperceptible amount of performance above 120 fps. Even at, say, 50 fps, the difference between 50 fps and 53.5 fps is hard to spot depending on the game. At 4k the 4090 is not limited by the 265k at all. The 5090 can show CPU bottlenecking with every CPU out right now, that is true.
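To put percentage figures like these in frame-time terms, here's a quick back-of-the-envelope helper. This is purely illustrative arithmetic on the percentages quoted in this thread, not benchmark data:

```python
def fps_uplift(base_fps, pct_gain):
    """Return (new_fps, frame_time_saved_ms) for a percentage FPS uplift.

    Frame time in milliseconds is 1000 / fps, so the same percentage
    uplift saves more absolute frame time at lower base frame rates.
    """
    new_fps = base_fps * (1 + pct_gain / 100)
    saved_ms = 1000 / base_fps - 1000 / new_fps
    return new_fps, saved_ms


# At 120 fps, a 7% uplift lands around 128.4 fps, shaving roughly half a
# millisecond per frame; at 50 fps the same 7% saves roughly 1.3 ms per
# frame, which is why identical percentages feel bigger at low frame rates.
```

This is the crux of the disagreement above: whether a given percentage matters depends on the base frame rate it's applied to.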
 
But the 9800X3D has 13.6% better minimum FPS @ 1080p. Unfortunately, they didn't test minimums at 1440p, but if you're planning on keeping the system for a few years, we'd expect newer games to be more CPU-bound than current ones, meaning future 1440p performance should come to resemble the current 1080p curve.

I advised someone I know to go with a 9800X3D, because he primarily wanted it for gaming, and he's the type of guy who will keep it for probably a good 5 years or so. I figured he'd therefore want a CPU with some legs. As YSCCC said, if his priorities ever change, he can swap out the CPU for a Zen 6 with up to 24 cores and the 9800X3D should hopefully still have a decent resale value.
I never said anything about 1080p. My claim specifically was,

"Sure, you can get more for gaming with a 7800X3D or a 9800X3D, but they cost significantly more currently while being much slower in multithreaded tasks. Mixed productivity builds that also game on the side are prime 265k recommends from me right now."

If all one is going to do is game at 1080p or 1440p with a 4090-tier or better card, then yeah, of course go with your flavor of X3D part. I did not expect to get jumped for saying the 265k is a good buy at certain prices for very specific needs... I even have the numbers on my side to back it up...

Also, just because future titles will use more CPU, that does not mean the bottleneck will shift to the CPU, because newer games typically make an even greater jump in graphics card requirements, which in turn gives the CPU more time to draw frames. How much CPU is required to saturate a given graphics card in the future is not something I speculate about; I leave that to the professionals with the equipment to test such things.
 
I never said anything about 1080p. My claim specifically was,
I know you didn't but their 9800X3D review had minimums only for 1080p, which is a predominantly CPU-limited resolution. I think that data is relevant in a forward-looking fashion, as current-gen CPUs become more bottlenecked at higher resolutions like 1440p.

Also, just because future titles will use more CPU, that does not mean the bottleneck will shift to the CPU, because newer games typically make an even greater jump in graphics card requirements,
But people traditionally upgrade GPUs more frequently than CPUs. I do, at least. My typical upgrade cycle involves changing GPU about twice as often as CPUs/systems.
 