News Puget says its Intel chip failures are lower than Ryzen failures — retailer releases failure rate data, cites conservative power settings

Status
Not open for further replies.
You need to be a semiconductor engineer (and a really good one) to refute what Intel said is safe and declare their marketed profiles unsafe, IMO. Tech-savvy folks and geeks can only guess from experience, while Intel designed and made their chips; by no means should we assume we know better. Only this time around they've put out something defective.
So if I predict that 15th gen won't be safe at 1.4 V and 5.8 GHz all-core at 400 watts, does that make me a semiconductor engineer, or what?
 
It depends on what time range you're assuming and how biased their data was to that time range. If it happened mostly in the first year of Ryzen 7000 and that's when they got the majority of their orders for those systems, then the shop failure rate could've been similar to Intel's 11th gen. So, it really depends on several unknowns and it's dangerous to make assumptions without that data.


Because the usage pattern is different. You just steamroller over this point, but you really shouldn't. The Alderon Games data represents an accelerated timetable relative to what most other people are doing with Raptor Lake. It effectively gives us the possibility to glimpse the future.
But we already have the "future" with 13th gen. They have been out for what, 20+ months?

Again, I fully acknowledge - actually, I was the one who first suggested it - that their 14th-gen data will, with 100% certainty, rise (patch notwithstanding) to at the very minimum the same field failure rate as 13th gen - but that still doesn't make it a widespread problem. What makes it seem widespread is the 8-to-2 ratio of Intel vs. AMD sales. Obviously more people will complain because more people have Intel in their servers. These are my exact thoughts about the issue:

1) Ryzen 7000 doesn't experience degradation; it just has a very high upfront failure rate. Meaning it basically comes dead from the factory, at an alarming rate I'd argue. That's "easy" to notice before you deploy a server, so you never end up deploying one with a failed CPU, and so you never see any crashes. With Intel the percentage, even though lower, is split: 1% comes DOA from the factory, another 1% dies after deployment. So naturally you'll have complaints, because some of your deployed Intel servers are crashing.

2) Very important: not everyone is using Puget's settings. Puget is running Intel and AMD defaults, which naturally leads to lower failure rates compared to everyone else.

3) AFAIK everyone and their mother is using Intel. Intel outsells AMD what, 8 to 2? Naturally there will be more complaints about the market leader, simply because there are way more affected users, even if the failure percentages might be similar.

I don't think any of the above points are contestable, and I really try to be as unbiased as humanly possible.
 
They're not ISO-power! The i9-14900K has a stock PL1 of 125W, whereas the i9-14900KS has a stock PL1 of 150W!

Puget benchmarks these systems as they configure them for customers, not according to your science experiments.
Unless their CBR24 run is the 10-minute test (they don't clarify), the whole run falls within Tau, which boosts both to 250 W.
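For intuition, the PL1/PL2/Tau behavior being discussed can be sketched with a toy model. This is a deliberate simplification (real hardware tracks an exponentially weighted moving average of power, and exact limits vary by board and BIOS); the 125 W / 253 W / 56 s figures are just illustrative stock-style values:

```python
# Toy model of Intel's PL1/PL2/Tau turbo budget (a simplification: real
# hardware uses an exponentially weighted moving average of power).
# The CPU may draw up to PL2 while that average stays at or below PL1.
def simulate(pl1, pl2, tau, seconds, demand):
    avg, log = 0.0, []
    for _ in range(seconds):
        cap = pl2 if avg <= pl1 else pl1   # turbo allowed while avg <= PL1
        power = min(demand, cap)
        avg += (power - avg) / tau         # moving average, time constant tau
        log.append(power)
    return log

# A ~10-minute all-core run: boosts to PL2 at first, settles to PL1 later.
log = simulate(pl1=125, pl2=253, tau=56, seconds=600, demand=253)
print(log[0], log[-1])   # 253 125
```

The point about a short benchmark run follows directly: in this sketch the average only crosses PL1 after roughly half a minute of full boost, so a test shorter than that never leaves the PL2 window.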
 
But we already have the "future" with 13th gen. They have been out for what, 20+ months?
Again, you're presuming the failure rate is more like a linear slope and not like a cliff. The nature of digital logic actually says it should be more cliff-like.

3) AFAIK everyone and their mother is using Intel. Intel outsells AMD what, 8 to 2? Naturally there will be more complaints about the market leader, simply because there are way more affected users, even if the failure percentages might be similar.

I don't think any of the above points are contestable, and I really try to be as unbiased as humanly possible.
This is an absurd point with regard to Puget. Their customers no doubt have the same expectation of reliability no matter the brand of CPU, and are going to initiate an RMA when the system becomes faulty. You can't seriously use this to explain away the difference in their "field" failure rates.
 
We already knew this was a significant chunk of the problem. A big part of the blame lies here, and on techtubers and various review websites that, by posting numbers users thought they needed to match, encouraged them to run their high-end K CPUs at peak endlessly. CPUs that people probably had no business owning if they were just going to drop them into an enthusiast motherboard and let them eat.
Sure, Intel reaped a bit of that glory as well, but anyone who didn't know what that K or KS designation meant should never have bought one.
FWIW, Intel was always up front about those: they are for enthusiasts, and when you buy one there was always the caveat that you were assuming some risk if you were going to run it hard, especially if you don't know what you're doing when tweaking a modern BIOS that lets you tinker with pretty much every last parameter.
They are usually the K and KS parts, which have Intel-guaranteed profiles (e.g., 6 GHz boost on the 14900K and 6.2 GHz on the KS). It's one thing for the user or even the board makers to apply an aggressive extra voltage offset, but from the latest developments, the microcode and VID table indeed hint that Intel shipped faulty "guaranteed" profiles in the VID table. That alone means you can't blame the buyers; the chips SHOULD run fine at those values for a long time and be stable in common code that other machines run (e.g., UE5 games).
So if I predict that 15th gen won't be safe at 1.4 V and 5.8 GHz all-core at 400 watts, does that make me a semiconductor engineer, or what?
No, that is just a random guess, and this time around you happen to match what Intel set as default. Nearly two decades ago, when the Pentium 4 rose and fell and the Core 2 arrived, people believed the 4 GHz barrier was unbreakable and the future was <4 GHz multicore; now basically every desktop CPU and their cousin runs above 4 GHz. And when the 1 GHz Athlon arrived, it was mocked as something you could fry an egg on through its heatsink, and the tech-savvy back then "knew" that running the core at 70°C was very dangerous; now everyone runs at that and considers it fine and safe. Tech evolves, and tech-savvy "knowledge" doesn't evolve until it has been proven that it must update to the new standard.

And mind you, IF it holds that the tech-savvy like you know better and Intel just shipped wildly unsafe VIDs (ref.: Igor's Lab found i9 VIDs maxing out at 1.5 V+, with most i9s at 1.4 V+ and the best of the best (4 out of a hundred) at 1.398 V), that alone would land Intel in a huge lawsuit: it is plain false marketing that only "the tech-savvy" would know to fix.
 
Again, you're presuming the failure rate is more like a linear slope and not like a cliff. The nature of digital logic actually says it should be more cliff-like.


This is an absurd point, in regards to Puget. Their customers no doubt have the same expectation of reliability no matter what brand of CPU, and are going to initiate a RMA when the system becomes faulty. You can't seriously use this to explain away the difference in their "field" failure rates.
I didn't use point 3 to explain the difference in failure rates. I'm trying to explain why we hear more noise about Intel. Because most deployed CPUs are made by... Intel?
 
No, that is just a random guess, and this time around you happen to match what Intel set as default. Nearly two decades ago, when the Pentium 4 rose and fell and the Core 2 arrived, people believed the 4 GHz barrier was unbreakable and the future was <4 GHz multicore; now basically every desktop CPU and their cousin runs above 4 GHz. And when the 1 GHz Athlon arrived, it was mocked as something you could fry an egg on through its heatsink, and the tech-savvy back then "knew" that running the core at 70°C was very dangerous; now everyone runs at that and considers it fine and safe. Tech evolves, and tech-savvy "knowledge" doesn't evolve until it has been proven that it must update to the new standard.

And mind you, IF it holds that the tech-savvy like you know better and Intel just shipped wildly unsafe VIDs (ref.: Igor's Lab found i9 VIDs maxing out at 1.5 V+, with most i9s at 1.4 V+ and the best of the best (4 out of a hundred) at 1.398 V), that alone would land Intel in a huge lawsuit: it is plain false marketing that only "the tech-savvy" would know to fix.
It's a random guess? OK, clearly you don't understand what a random guess is.

What does "what people believed back in core 2 duo" have to do with anything? I really have no idea what you are getting at.
 
It's a random guess? OK, clearly you don't understand what a random guess is.

What does "what people believed back in core 2 duo" have to do with anything? I really have no idea what you are getting at.
Because you didn't design the chip. You don't know for sure the materials, the materials science, how they make the chip, or the secret sauce of their design. Every chip has its design life and a limit to whatever usage, current, and voltage it can withstand. By definition, the designer SHOULD know better, and they have all the tools and tests on hand to torture-test the chips and make projections; everyone who wasn't involved is just guessing/speculating.

"what people believed back in core 2 duo" was what "tech savys" common sense, now do you still hold those tech savy "knowledge"? surely not, then why do you assume you should know better than Intel themselves?
 
In my opinion, between two companies that produce bad products, it all boils down to how they treat their customers. Whether it's AMD or Intel, there are bound to be bad products at some point in time. But Intel tried to sweep the issue under the carpet for way too long, stonewalled their customers' RMA requests when they knew there was an issue, demonstrated poor competence by blaming everyone but themselves, and is still unable to resolve the problem after multiple performance-regressing fixes. Hence they let the issue morph into something that is attracting too much attention and cannot be contained now. As an Intel user, reading about this issue over the months, it just looks like a slow-motion car crash.
 
Because you didn't design the chip. You don't know for sure the materials, the materials science, how they make the chip, or the secret sauce of their design. Every chip has its design life and a limit to whatever usage, current, and voltage it can withstand. By definition, the designer SHOULD know better, and they have all the tools and tests on hand to torture-test the chips and make projections; everyone who wasn't involved is just guessing/speculating.

"What people believed back in the Core 2 Duo days" was the "tech-savvy" common sense of the time. Do you still hold that tech-savvy "knowledge" now? Surely not. Then why do you assume you should know better than Intel themselves?
A random guess is a guess based on zero empirical data. Obviously I'm not making a random guess, or you don't know what random means.

I don't know who believed what back in those tech-savvy days; if anyone said we would never break 4 GHz, he was being silly. I'm not suggesting we will never break the 6 GHz barrier; I'm saying we are not there yet.

It's quite simple, really. When CPUs have a different all-core boost than their ST boost, then yeah, we are not there yet for the ST boost. Obviously it's pushing a voltage so high that it is only viable at very low amperages; that's why it can't be sustained all-core.
 
A random guess is a guess based on zero empirical data. Obviously I'm not making a random guess, or you don't know what random means.

I don't know who believed what back in those tech-savvy days; if anyone said we would never break 4 GHz, he was being silly. I'm not suggesting we will never break the 6 GHz barrier; I'm saying we are not there yet.

It's quite simple, really. When CPUs have a different all-core boost than their ST boost, then yeah, we are not there yet for the ST boost. Obviously it's pushing a voltage so high that it is only viable at very low amperages; that's why it can't be sustained all-core.
So when do you know we are at the point where, say, all-core 5.7 GHz and single-core 6 GHz at 8 hours a day of use, or 1.45 V, is safe? It's when it is released and guaranteed by a big tech giant and becomes a product, whether that's Apple, AMD, IBM, Intel, or whoever else. They design the chip and sell it at a cost, guaranteeing a certain level of performance and reliability for what they built into it. No "tech-savvy" person should know better, as we do not choose or know the detailed design, the materials used, or the subsequent longevity test data that the tech giants themselves have. So why can you insist that, before all the bad press came out, we should have known the Intel defaults were unsafe? At least the i9s have VID tables above 1.4 V, except for the few golden samples riding that VID. And if you don't know all that empirical data for the specific generation (RPL here), you have zero data on whether it is safe to run, and hence you are guessing.

Step back and assume you are right: you are now suggesting Intel is not putting out a defective product, but outright scammed everyone who purchased a CPU with a VID higher than 1.4 V and an SC boost at 6 GHz, because every "tech-savvy person and their mother should know it is unsafe to run at those voltages and speeds"? This is interesting.
 
In my opinion, between two companies that produce bad products, it all boils down to how they treat their customers. Whether it's AMD or Intel, there are bound to be bad products at some point in time. But Intel tried to sweep the issue under the carpet for way too long, stonewalled their customers' RMA requests when they knew there was an issue, demonstrated poor competence by blaming everyone but themselves, and is still unable to resolve the problem after multiple performance-regressing fixes. Hence they let the issue morph into something that is attracting too much attention and cannot be contained now. As an Intel user, reading about this issue over the months, it just looks like a slow-motion car crash.
Couldn't agree more, and worse still, they are still trying to fix it without a definitive, guaranteed fix in hand.

It looks way too much like they are trying to delay until their next gen is out, so the frustration from old-gen customers can just fade away.
 
So when do you know we are at the point where, say, all-core 5.7 GHz and single-core 6 GHz at 8 hours a day of use, or 1.45 V, is safe? It's when it is released and guaranteed by a big tech giant and becomes a product, whether that's Apple, AMD, IBM, Intel, or whoever else. They design the chip and sell it at a cost, guaranteeing a certain level of performance and reliability for what they built into it. No "tech-savvy" person should know better, as we do not choose or know the detailed design, the materials used, or the subsequent longevity test data that the tech giants themselves have. So why can you insist that, before all the bad press came out, we should have known the Intel defaults were unsafe? At least the i9s have VID tables above 1.4 V, except for the few golden samples riding that VID. And if you don't know all that empirical data for the specific generation (RPL here), you have zero data on whether it is safe to run, and hence you are guessing.
You are making a common thinking error. You are equating "knowing" with 100% certainty. All "knowing" really means is "I really, really believe".

When do we know? Well, again: when a CPU ships with ST boosts that differ from its all-core boosts, that should be a very, very red flag that the ST boosts are treading toward dangerous territory, or even past it. It's not a safe limit to run at, because if it were, the chip would be able to hit it in all-core workloads as well, not just in very-low-amperage situations. On top of that, TVB boosts are also dangerous; by their very definition they are literally on the edge of stability. Having to keep temperatures below a certain point for them to even work is a huge red flag.

That's why, when I test my rigs' stability, I always do it with the CPU running at 90+ degrees. I don't want stability to depend on keeping temperatures low.
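The TVB gating being argued about can be sketched roughly as follows. This is illustrative only: the real logic and thresholds live in firmware, and the 70 °C / 6000 MHz / +200 MHz numbers here are stand-ins, not Intel's exact values:

```python
# Illustrative sketch of how Thermal Velocity Boost gates its extra bin:
# the bonus frequency is only granted while the die stays below a
# temperature threshold. All numbers are stand-ins, not Intel's specs.
TVB_TEMP_LIMIT_C = 70
BASE_BOOST_MHZ = 6000
TVB_BONUS_MHZ = 200

def max_boost_mhz(die_temp_c: float) -> int:
    """Return the allowed single-core boost for the current die temperature."""
    if die_temp_c < TVB_TEMP_LIMIT_C:
        return BASE_BOOST_MHZ + TVB_BONUS_MHZ   # cool enough: extra bin granted
    return BASE_BOOST_MHZ                       # warm: back to regular turbo

print(max_boost_mhz(55), max_boost_mhz(95))   # 6200 6000
```

That temperature branch is the crux of the complaint above: the advertised peak frequency only exists on one side of it.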

It's not just black and white. Again, you are falsely equating knowing with 100% certainty, as if not having 100% certainty means I'm just guessing. That is false. There is white, there is black, and then there is gray. I'm making a very educated guess.

Step back and assume you are right: you are now suggesting Intel is not putting out a defective product, but outright scammed everyone who purchased a CPU with a VID higher than 1.4 V and an SC boost at 6 GHz, because every "tech-savvy person and their mother should know it is unsafe to run at those voltages and speeds"? This is interesting.
Yep, if Intel was the one asking mobo makers to remove all power limits and amp limits, use TVB and all that crap, and then asked reviewers to run their reviews like that, then yeah, Intel's settings are scummy.
 
You are making a common thinking error. You are equating "knowing" with 100% certainty. All "knowing" really means is "I really, really believe".

When do we know? Well, again: when a CPU ships with ST boosts that differ from its all-core boosts, that should be a very, very red flag that the ST boosts are treading toward dangerous territory, or even past it. It's not a safe limit to run at, because if it were, the chip would be able to hit it in all-core workloads as well, not just in very-low-amperage situations. On top of that, TVB boosts are also dangerous; by their very definition they are literally on the edge of stability. Having to keep temperatures below a certain point for them to even work is a huge red flag.

That's why, when I test my rigs' stability, I always do it with the CPU running at 90+ degrees. I don't want stability to depend on keeping temperatures low.

It's not just black and white. Again, you are falsely equating knowing with 100% certainty, as if not having 100% certainty means I'm just guessing. That is false. There is white, there is black, and then there is gray. I'm making a very educated guess.
You do know that it is still a guess? TVB has been shipping since 2018, and it didn't hinder longevity. As for thermals, this is what the Intel rep publicly said:

View: https://www.youtube.com/watch?v=ljZt_TQegHE&t=972s


So why should we not have trusted the Intel rep back then, but trust our own "educated" guess? People buy Intel because they trust Intel, not you or any tech-savvy person. And assuming this time you are right, why shouldn't Intel recall RPL?

No tech-savvy person can write the microcode or the VID table inside the Intel chips, which, if not undervolted manually, calls for 1.4 V+ Vcore on most i9s, to say the least.

By now they admit that it is not only the power limits set by the board vendors; something in their VID table and microcode is faulty to begin with. So why on earth is Intel still a sign of stability, when their factory settings need tech-savvy users or system builders to hunt down the most conservative of the three profiles they published, just to... survive and be stable?
 
You do know that it is still a guess? TVB has been shipping since 2018, and it didn't hinder longevity. As for thermals, this is what the Intel rep publicly said:
The sun will rise in the east tomorrow. If that's a guess to you, then sure, my claim is also a guess.

So why should we not have trusted the Intel rep back then, but trust our own "educated" guess? People buy Intel because they trust Intel, not you or any tech-savvy person. And assuming this time you are right, why shouldn't Intel recall RPL?
Because Intel is trying to sell you something. You shouldn't trust Intel, Nvidia, AMD, Apple, or any other company, for that matter. They are all scummy and trying to get as much of your money as possible with as little effort as possible; that's their number one goal. If you want to trust them, go ahead, don't let me stop you.
No tech-savvy person can write the microcode or the VID table inside the Intel chips, which, if not undervolted manually, calls for 1.4 V+ Vcore on most i9s, to say the least.

By now they admit that it is not only the power limits set by the board vendors; something in their VID table and microcode is faulty to begin with. So why on earth is Intel still a sign of stability, when their factory settings need tech-savvy users or system builders to hunt down the most conservative of the three profiles they published, just to... survive and be stable?
The Intel defaults are very well known; just because techtubers pretend otherwise for clicks doesn't make it so. 307 A ICCmax / 253 W PL2 / 56 s Tau are the Intel defaults. If you use those settings, you most likely won't have an issue, at least not at an alarming rate. Puget's graph shows that as well, btw.
 
But neither does AMD get off the hook, since even with this low performance the 7950X still fails at over twice the rate of Intel's chips, right?
The chips that were failing were failing in the lab, before sales to clients. Look at those numbers again.

And I think most of that comes from the early problems with AM5. Remember how ASUS motherboards were setting 7000-series chips on fire?

I suspect those numbers are heavily influenced by those early motherboard/chipset problems.
 
The chips that were failing were failing in the lab, before sales to clients. Look at those numbers again.

And I think most of that comes from the early problems with AM5. Remember how ASUS motherboards were setting 7000-series chips on fire?

I suspect those numbers are heavily influenced by those early motherboard/chipset problems.
Yes, they fail in the lab. No, it's not because of the ASUS motherboards; Puget doesn't use XMP and doesn't use mobo defaults.

Failing in the lab is a sign of terrible QC, not a good one. Shipping CPUs that will fail within a day or two is worse than shipping CPUs that will fail in a year or two, no?

Company A) 1% of their CPUs will fail in a day or two, and another 1% will fail in the period of a couple of years.

Company B) 4% of their CPUs will fail in a day or two, and another 0.3% will fail in the period of a couple of years.


I cannot fathom how Company B has the better quality control / lower failure rate, or whatever you want to call it.
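Plugging the hypothetical above into a quick sketch makes the point concrete; the figures are the post's illustrative ones, not real vendor data:

```python
# Illustrative comparison of shop (pre-deployment) vs field failure rates,
# using the hypothetical Company A / Company B numbers from the post above.
companies = {
    "A": {"shop": 0.01, "field": 0.01},
    "B": {"shop": 0.04, "field": 0.003},
}

for name, r in companies.items():
    total = r["shop"] + r["field"]
    print(f"Company {name}: shop {r['shop']:.1%}, "
          f"field {r['field']:.1%}, total {total:.1%}")
# Company A fails more often in the field; Company B fails more often overall.
```

Which vendor "looks worse" therefore depends entirely on whether you count shop failures, field failures, or both, which is the whole disagreement in this subthread.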
 
Because Intel is trying to sell you something. You shouldn't trust Intel, Nvidia, AMD, Apple, or any other company, for that matter. They are all scummy and trying to get as much of your money as possible with as little effort as possible; that's their number one goal. If you want to trust them, go ahead, don't let me stop you.
OK, then we finally agree on something, and that is that Intel IS scamming with the sustainable specs built into their chips' VID table and microcode. Case closed; they should be punished for selling out-of-the-box unstable/unsustainable stuff. I think false marketing is illegal in at least the USA and the EU, isn't it?
 
Hi folks,

I am the author mentioned in the latest Gamer's Nexus video.

Matt and I already discussed the misunderstanding regarding his removed comment, and I had considered the matter settled. I was caught off guard when I saw how Steve took the incident out of context and misrepresented my actions. Anyone who thinks I am censoring criticism on /r/Intel is mistaken, and at the very least hasn't actually visited or looked at the subreddit lately. If they had, they'd know /r/Intel has been on fire with critical threads in the past few months as Intel has utterly fumbled their response to the instability in their CPUs.

In regards to conflicts of interest - after the pitchforks have settled, if y'all truly believe that my position as a moderator is a conflict of interest I am not opposed to stepping down. I have no interest in moderating a community which doesn't want me around.

But I think if you look at my Reddit history, you'll realize it would be a loss for the subreddits that I am an active moderator in. I am very "libertarian" with my approach to moderation and free speech, and if I left I'd likely be replaced with someone with a less laissez faire approach to moderating.
Regarding this: on /r/Intel, out of the top 15 popular topics, 10 are negative towards Intel. Now, that doesn't speak for itself about objectivity; we need comparative numbers.

So I checked /r/AMD: we get 0 out of 50.

I think you are doing fine, keep at it.
 
OK, then we finally agree on something, and that is that Intel IS scamming with the sustainable specs built into their chips' VID table and microcode. Case closed; they should be punished for selling out-of-the-box unstable/unsustainable stuff. I think false marketing is illegal in at least the USA and the EU, isn't it?
Again, I don't know the details to tell you who is being scummy. Because, again, what I understand to be Intel's "default" specs are fine. What motherboards run as default is not fine. What guidance Intel gives to reviewers, I don't know. If Intel is telling reviewers to remove all power and amp limits, then yeah, that is scummy advertising.

The major problem I see with Intel is that they are pushing the K chip as the default option, when it should be the special overclockers' edition. It makes no sense, regardless of stability, for your "default" chip to be the one that boosts to 250 W, even if mobos kept the default power limits. I have no freaking clue why Intel does this; I don't think it helps their sales, since most reviews cover the power draw negatively. I really don't get it. They should be sending the non-K chips to reviewers and advertising those as the defaults.
 
Again, I don't know the details to tell you who is being scummy. Because, again, what I understand to be Intel's "default" specs are fine. What motherboards run as default is not fine. What guidance Intel gives to reviewers, I don't know. If Intel is telling reviewers to remove all power and amp limits, then yeah, that is scummy advertising.

The major problem I see with Intel is that they are pushing the K chip as the default option, when it should be the special overclockers' edition. It makes no sense, regardless of stability, for your "default" chip to be the one that boosts to 250 W, even if mobos kept the default power limits. I have no freaking clue why Intel does this; I don't think it helps their sales, since most reviews cover the power draw negatively. I really don't get it. They should be sending the non-K chips to reviewers and advertising those as the defaults.
Again, it is now believed that the voltage is as much of an issue as the power limit, if not more, and the voltage request is built into the VID table and microcode of the Intel CPU itself, no matter what the MB vendors do; that's why even the 65 W parts are believed to be affected. Also, K versions are simply unlocked versions, which you can tinker with FURTHER, above whatever defaults Intel ships; the defaults of the K series should be conservative enough to be stable and long-lasting. Further OC is the user's fault, but if the user needs to further downclock/undervolt/limit the chip, that is Intel's fault and false advertising. I don't see how one can still consider this the user's or MB vendor's fault and not Intel's at this point.
 
Again, it is now believed that the voltage is as much of an issue as the power limit, if not more, and the voltage request is built into the VID table and microcode of the Intel CPU itself, no matter what the MB vendors do; that's why even the 65 W parts are believed to be affected. Also, K versions are simply unlocked versions, which you can tinker with FURTHER, above whatever defaults Intel ships; the defaults of the K series should be conservative enough to be stable and long-lasting. Further OC is the user's fault, but if the user needs to further downclock/undervolt/limit the chip, that is Intel's fault and false advertising. I don't see how one can still consider this the user's or MB vendor's fault and not Intel's at this point.
I didn't say it's the mobo's fault. I literally said I don't know.

Does Intel force reviewers to test with no power limits / amp limits, etc.? If yes, then it's clearly Intel's fault. If not, then it's clearly the reviewers' fault.

Does Intel force mobos to ship with no power limits / amp limits, etc.? If yes, see above.

But let me remind you that mobos were pulling the same crap with AM4. Even though AMD was quite clear about what the default power limits were, a lot if not all motherboard manufacturers cheated by misreporting power draw to the CPU using an offset on their SVID interface. It became so widespread that HWiNFO updated their software and added a reading that lets you see how far off the reported power draw is. It's called Power Reporting Deviation.

Basically, the mobo was feeding wrong data to the CPU, making it think it was drawing less power than it actually was so it would turbo boost higher, lol.
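A sketch of that under-reporting trick, for concreteness. This is illustrative: the real offset sits in the board's SVID telemetry, and the wattages here are made up:

```python
# Sketch of AM4-era "power reporting deviation": the board under-reports
# power to the CPU, so the CPU thinks it has power budget left and boosts
# higher. Wattages below are made-up illustrative values.
actual_draw_w = 142.0     # what the CPU really pulls at the socket
reported_draw_w = 100.0   # what the board's telemetry tells the CPU

# Roughly the kind of ratio HWiNFO surfaces as Power Reporting Deviation:
# ~100% means honest telemetry, well under 100% means under-reporting.
deviation = reported_draw_w / actual_draw_w
print(f"Power reporting deviation: {deviation:.0%}")   # 70%
```

With telemetry reading 30% low, a 142 W load "counts" as 100 W against the package power limit, which is exactly how boards bought themselves extra boost headroom.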
 
Historically, Intel and AMD CPUs have generally displayed failure rates below 1% within the first year; some estimates put these rates as low as 0.1% to 0.5%. This applies to large vendors such as Dell as well as to specialized integrators such as Puget Systems.

Now, these new numbers from Puget, showing a recent surge of CPU failure rates to between 2% and 7%, are extremely alarming, especially as Puget is already tuning their systems fairly conservatively.
That being said, it is safe to assume that failure rates from Dell, HP, or Lenovo are significantly lower than even those reported by Puget; otherwise, with millions of customers affected, there would be a much larger public uproar than we have seen so far.

So here is a conspiracy theory: what if Intel knew very well what's going on with their CPUs and quietly told their large partners (such as Dell) to massively downregulate BIOS, voltage, PL-state, RAM, and other settings in their systems, particularly in their business and workstation desktops?
 
I didn't say it's the mobo's fault. I literally said I don't know.

Does Intel force reviewers to test with no power limits / amp limits, etc.? If yes, then it's clearly Intel's fault. If not, then it's clearly the reviewers' fault.

Does Intel force mobos to ship with no power limits / amp limits, etc.? If yes, see above.

But let me remind you that mobos were pulling the same crap with AM4. Even though AMD was quite clear about what the default power limits were, a lot if not all motherboard manufacturers cheated by misreporting power draw to the CPU using an offset on their SVID interface. It became so widespread that HWiNFO updated their software and added a reading that lets you see how far off the reported power draw is. It's called Power Reporting Deviation.

Basically, the mobo was feeding wrong data to the CPU, making it think it was drawing less power than it actually was so it would turbo boost higher, lol.
https://community.intel.com/t5/Proc...ty-Reports-on-Intel-Core-13th-and/m-p/1617113

"Based on extensive analysis of Intel Core 13th/14th Gen desktop processors returned to us due to instability issues, we have determined that elevated operating voltage is causing instability issues in some 13th/14th Gen desktop processors. Our analysis of returned processors confirms that the elevated operating voltage is stemming from a microcode algorithm resulting in incorrect voltage requests to the processor."

This is from Intel themselves. In plain English, the analysis confirmed that a microcode issue is pumping excessive voltage. So no matter whether the MB sticks to the Intel Performance or Extreme profile or not, the chips will be boosting at incorrect voltages, and degradation will still show up, maybe just a bit later and under heavier workloads. It is still defective, and still Intel's problem. Even with, say, the 253 W profile, in normal gaming the CPU doesn't draw nearly as much power as it could; when I run an all-core load for flight simming, the power draw is ~90-120 W at most. So unless all those complaints come from people running Cinebench 24/7 for a week, it just doesn't make sense that this is a power-limit issue without the VID being at fault.
 