News Puget says its Intel chips failures are lower than Ryzen failures — retailer releases failure rate data, cites conservative power settings

Page 4
Status
Not open for further replies.

Gururu

Upstanding
Jan 4, 2024
299
195
370
Long time Intel user with a 13700K and now a 14900K, I have always dropped the vcore down from auto as the motherboards are pushing absurd voltage if left to their stock settings. Always fixed or adaptive with a 1.325v limit.

I cannot believe there was a 50% failure rate; if there were, the internet would have blown up a long time ago, with far too many 13th and 14th Gen enthusiasts screaming at Intel, which is not what happened.
That figure was reported by Tom's from a tweet by a mid-level manager, or rather a supervisor, with no data or confirmation from his company.
 

ravewulf

Distinguished
Oct 20, 2008
984
41
19,010
Apparently some people didn't read the parts of the article where caveats apply to this report (conservative BIOS settings, low sample size, etc)
 
Mar 10, 2020
408
375
5,070
Long time Intel user with a 13700K and now a 14900K, I have always dropped the vcore down from auto as the motherboards are pushing absurd voltage if left to their stock settings. Always fixed or adaptive with a 1.325v limit.
Thing is, you shouldn’t have to do so.
Intel should specify, not recommend settings to which the motherboard manufacturers should adhere.
 
  • Like
Reactions: Hotrod2go
±200 computers per month according to the article, so 2,000+ over the period 13th and 14th Gen have been on the market.

That is a very good sample size to draw conclusions from.
Considering the specific use case, it really isn't. (I used to do failure analysis for a VERY large multinational.) You need to be able to compare different applications, different settings, and different workloads. 2,000 units, all with similar use cases and settings, give some information, but not enough to be meaningful by itself. Puget is just riding the hype train whilst promoting their product, and that is fine.

But EVERYTHING discussed on these forums, every other forum, Reddit, whatever, is all just meaningless conjecture. It's fun to have a discussion, but pretty much any of the, ahem, "opinions" in these discussions is at best spitballing. None of the armchair experts, despite all their claims, have any real, meaningful knowledge about what is or is not going on here.

Until Pat Gelsinger apparates from the aether into Tom's Hardware forums to discuss the issues in person, there is very little useful information within. The fact that we're down to posts about journalistic integrity regarding Tom's writers also being Reddit moderators should tell everyone that it's time to put the topic to bed. I know it won't be, but nobody should be expecting any more revelations here.
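For a rough sense of how much statistical uncertainty a sample of ~2,000 units carries, here is a minimal sketch using the Wilson score interval. The 2% failure rate is an assumed, illustrative number, not Puget's actual figure:

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a failure proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Assume 40 failures out of ~2,000 systems (a 2% observed rate):
lo, hi = wilson_interval(40, 2000)
print(f"95% CI: {lo:.3%} .. {hi:.3%}")  # roughly 1.5% .. 2.7%
```

Even before accounting for the single, narrow use case, the raw counting uncertainty alone spans nearly a factor of two, which supports the point that 2,000 similar units can't settle much by themselves.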
 

LolaGT

Reputable
Oct 31, 2020
284
258
5,090
We already knew this was a significant chunk of the problem. A big part of the blame lies on this, and on techtubers and various review websites that, by posting numbers users thought they needed to match, encouraged people to run their high-end K CPUs at peak endlessly. CPUs that people probably had no business owning if they just dropped them into an enthusiast motherboard and let them eat.
Sure, Intel reaped a bit of that glory as well, but anyone who didn't know what the K or KS designation meant should never have bought one.
FWIW, Intel was always up front about those: they are for enthusiasts, and when you buy one there was always the caveat that you were assuming some risk if you ran it hard, especially if you don't know what you're doing when tweaking a modern BIOS that lets you tinker with pretty much every last parameter.
Aggressive BIOS power settings set by motherboard manufacturers could be one of the contributing factors to Intel's failures, likely either exacerbating or accelerating chip degradation.
 
Considering the specific use case, it really isn't. (I used to do failure analysis for a VERY large multinational) You need to be able to compare different applications, different settings and workloads. […]
I farted.
Does that count?
 

SunMaster

Commendable
Apr 19, 2022
214
193
1,760
We already knew this was a significant chunk of the problem. A big part of the blame lies on this, and techtubers and various review websites, that encouraged by posting numbers users thought they needed to match, and run their high end k CPU at peak endlessly. CPUs that people probably had no business owning if they just dropped it in an enthusiast MB and let it eat.
Sure intel reaped a bit of that glory as well, but anyone who didn't know what that k or ks designation meant should never have bought one of them.
FWIW Intel was always up front on those, they are for enthusiasts and when you buy one there was always the caveat that you were assuming some risks if you were going to run it hard, especially if you don't know what you are doing tweaking a modern bios that lets you tinker with pretty much every last parameter.
Oh - you're pretending it only applies to the K models?
 

Gururu

Upstanding
Jan 4, 2024
299
195
370
Considering the specific use case, it really isn't. (I used to do failure analysis for a VERY large multinational) You need to be able to compare different applications, different settings and workloads. […]
We can’t put anything to bed because OC has a dual meaning.
 
  • Like
Reactions: Hotrod2go

MobileJAD

Prominent
May 15, 2023
21
12
515
If Puget's statistics are true, then where are the AMD customer posts and outrage?

Since 1995, I've never had an AMD or Intel CPU fail on me. Had a few GPUs, hard drives and PSUs break though.
Honestly, same. Every AMD and Intel CPU has just worked, no matter how long I've owned it. The only issue I ever had was with CPUs that used pins, where my clumsy self might have bent or broken a pin. But I've never had a CPU die on me.
However, I've had tons of motherboards and PSU units fail on me. My GPUs have been durable, but not immune to failure. Plenty of my storage devices have stopped working.
But never a CPU.
 

Mattzun

Reputable
Oct 7, 2021
101
155
4,760
A 50 percent failure rate on CPUs at a company that is using the CPU 24/7 in a way that appears to trigger the error is believable.

Three months of use in that scenario could cause the same degradation as two years of normal use.
Running those servers for a week with enhanced logging could catch as many errors as a normal user would see in a year.

All core loads like those seen in many workstations or cinebench don’t seem to cause the high voltages associated with rapid degradation. I’m more surprised by the recent spike in 14th gen issues than I am that Puget hasn’t had huge issues with i9s.
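The "three months of server use ≈ two years of normal use" claim is essentially a duty-cycle ratio. A back-of-the-envelope sketch, with assumed, purely illustrative hours-per-day figures (not measured data):

```python
# Purely illustrative duty-cycle assumptions, not measured figures:
server_hours_per_day = 24     # game servers run flat out around the clock
desktop_hours_per_day = 3     # assumed typical desktop/gaming load
acceleration = server_hours_per_day / desktop_hours_per_day  # 8x

months_on_server = 3
equivalent_desktop_months = months_on_server * acceleration
print(equivalent_desktop_months)  # 24.0 months, i.e. about two years
```

Under these assumptions, a server fleet really would act as an accelerated-aging test bed, which is why heavy 24/7 users could surface degradation long before typical desktop owners.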
 
Who do we really blame here?
Intel or the board manufacturers?
Intel allowed the board makers to set unlimited power and boost as their default settings.
Unknowing buyers who purchased boards with "overclock" settings now have unstable/failing systems.
Don't get me wrong.
If I allow my 5600X unlimited power and boost, it performs much better in benchmarks and games.
But 82-83 watts with peaks reaching 93 watts would probably kill it pretty fast if loaded heavily very often.
Who would have thought that possible? :??:
 
Aug 3, 2024
11
4
15
You can be 100% sure these statistics are a lie or a joke, not accurate.

Intel super-overclocks its hot CPUs, so how do cooler AMD chips (not overclocked) have failures? They must be running the systems wrong, like with one stick of RAM (it should be two sticks for dual channel), and then counting that as a failure!

Why didn't they publish these numbers before?

Why can't we see complaints about AMD, or a video?

I'm sure that, as always, Intel paid them money to create these joke statistics.
 
Last edited:

Hotrod2go

Prominent
Jun 12, 2023
217
59
660
I would like to see some numbers from Acer, Dell, HP, Lenovo and other large PC vendors.
Regardless of the numbers, Intel has damaged their reputation greatly.
I agree, a broader set of data would clear up a lot of the speculation and guesswork about who exactly is to blame here. But Intel already has dirt on them.
 
The only new CPU that has failed on me is a 5700G that freezes in Adobe Photoshop.
AMD's RMA took two months to give me my money back.
The only RMA I'd do with Intel is to get a golden-sample CPU :D
Graphics cards always die =)
 

ravewulf

Distinguished
Oct 20, 2008
984
41
19,010
±200 computers per month according to the article, so 2,000+ over the period 13th and 14th Gen have been on the market.

That is a very good sample size to draw conclusions from.

That's peanuts compared to larger vendors and isn't anywhere near enough to be representative of the wider issue. Puget themselves specifically call out this point:

To determine the scale of the findings, we asked Puget about the number of systems the PC builder has shipped. “We don’t do a ton of volume [...]"
 
  • Like
Reactions: jlake3

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Everybody knows Intel pays Adobe so they can come out ahead there even when they’re losing in virtually every other workload. There’s no other valid reason that literally all Adobe software heavily favors Intel hardware compared to literally the rest of the software in the world.
Haha, sure man.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
So, 2,000+ computers are not enough to draw conclusions because AMD has a worse failure rate, but Alderon Games, which probably uses 2, 3, 5, maybe 10 computers for their servers, is obviously a high enough sample size because they claimed Intel has a 100% failure rate.

Yay, as usual, the AMD brigade.
 
  • Like
Reactions: LolaGT

bit_user

Titan
Ambassador
Everybody knows Intel pays Adobe so they can come out ahead there even when they’re losing in virtually every other workload. There’s no other valid reason that literally all Adobe software heavily favors Intel hardware compared to literally the rest of the software in the world.
The Adobe apps they highlighted are lightly-threaded, where Raptor Lake tends to have an advantage. This can be seen in the relatively small performance discrepancy between CPUs with wildly different core counts.

Also, in the text below Adobe Premiere Pro and DaVinci Resolve, they credit the iGPU's compression engine (i.e. "Quick Sync technology") for much of those wins:

"For Premiere Pro and Resolve in particular, Intel CPUs with Quick Sync are hard to beat for longGOP codecs like H.264 and HEVC"
 

bit_user

Titan
Ambassador
So, 2000+ computers are not enough to draw conclusions cause amd has a worse failure rate.
It's not the gross number, but rather the granularity of the data. We can't tell whether Ryzen 7000 only had a high "shop" failure rate at the beginning, or if it's been relatively constant, throughout. Unlike with Intel, Puget didn't provide any timeline data for AMD.

Hopefully, Paul will get them to provide this data.
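The granularity point is easy to illustrate: two hypothetical monthly failure timelines with identical totals tell very different stories, and an aggregate rate alone cannot distinguish them. The numbers below are invented purely for illustration:

```python
# Invented monthly failure counts; both series sum to the same total.
early_cluster = [10, 8, 5, 1, 0, 0, 0, 0]  # teething problems, later fixed
steady_rate   = [3, 3, 3, 3, 3, 3, 3, 3]   # constant failures in the field

# Identical aggregate counts, so an overall failure *rate* can't
# separate "early bad batch" from "ongoing degradation":
print(sum(early_cluster), sum(steady_rate))
```

Only the second shape would suggest an ongoing problem like progressive chip degradation, which is why per-month data for the AMD systems would matter.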
 
  • Like
Reactions: slightnitpick

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
It's not the gross number, but rather the granularity of the data. We can't tell whether Ryzen 7000 only had a high "shop" failure rate at the beginning, or if it's been relatively constant, throughout. Unlike with Intel, Puget didn't provide any timeline data for AMD.

Hopefully, Paul will get them to provide this data.
By beginning, what do you mean? That the vast majority of that 4-5% failure rate was all in the first few batches? I don't think that's very likely, because what would the failure rate on those batches be, like 50%? DIY builders would have experienced similar failure rates, and that never happened.

Still, the point is, 2,000 computers > 10 computers, but people poop on the first and accept the second. We all know the reasons why that is, of course. We've already "learned" that Puget is lying and Adobe is nerfing performance on AMD on purpose, so yeah, sure.
 

YSCCC

Commendable
Dec 10, 2022
563
457
1,260
You are right, you need to be a semiconductor engineer to realize 400 watts and 1.4+ volts ain't safe for 24/7. My bad.
You need to be a semiconductor engineer (and a really good one) to refute what Intel says is safe and to call what they put into their marketed profiles unsafe, IMO. Tech-savvy users and geeks can only guess from experience, and Intel designed and made their chips; by no means should we assume we know better. Only this time round they've put out something defective.

Its clear that most of the big OEMs have the same strategy of using conservative settings to avoid support costs.

I wonder if Intel has known that they were pushing things to levels that weren't stable in the long term but didn't do anything about it because it wouldn't really affect Dell etc.

Dell's standard $1800 14700 system has a single 32GB stick of DDR5 RAM.
This is just insane given that the 2x16GB config is a zero-dollar option and should be 10 percent faster.
I suspect that the systems are more stable with a single stick of RAM.

I'd also suspect that Dell's BIOS has conservative settings, and that the user can't even pick many of the options motherboard vendors offer that could increase support costs.
It could be, or it's just another method to make you upgrade to 2x32 GB at a hefty cost. For OEMs, being slower or less customisable isn't important, as their target group more than likely doesn't know or understand how to even test it. Just making it work, and cheaper, is the main goal.
 

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Everybody knows Intel pays Adobe so they can come out ahead there even when they’re losing in virtually every other workload. […]
Out of curiosity, I gave you the benefit of the doubt and went ahead and checked the TPU review: Intel wins in 25 workloads excluding the Adobe ones, from video and music encoding to databases, Microsoft apps, and some AI stuff. So clearly you are blatantly wrong; it's not just Adobe that favors Intel.
Something about Puget's benchmarks looks strange to me.
In the review of the 14900KS they said something like this: "In heavily multi-threaded workloads like Cinebench, Unreal Engine, V-Ray, and Blender, the 14900KS is a much more significant 6-17% faster than the 14900K. It does especially well in the multi-core rendering benchmarks, averaging a 15% performance increase."
How can a 14900KS be 15% better than a 14900K in MT rendering?
At ISO power (since Puget manually sets Intel defaults) it's not unreasonable. I've said it before: those better-binned chips might look inefficient in reviews that run with no power limits, but they are in fact more efficient when you run them with some semblance of sanity. 5 to 15% means one CPU is running at 4.5 GHz and the other at 4.8 to 5.2 GHz. Completely within reason. If you put both the K and the KS at the same power, the KS is faster.
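The frequency figures in the post check out arithmetically; taking 4.5 GHz as the baseline:

```python
# Clock-speed ratios from the post: 4.5 GHz baseline vs 4.8-5.2 GHz.
base_ghz = 4.5
for clock in (4.8, 5.2):
    gain = clock / base_ghz - 1
    print(f"{clock} GHz vs {base_ghz} GHz: {gain:.1%}")
# 4.8 GHz is ~6.7% faster and 5.2 GHz ~15.6% faster,
# roughly matching the quoted 5-15% band.
```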
 
  • Like
Reactions: slightnitpick

bit_user

Titan
Ambassador
By beginning, what do you mean, that the vast majority of that 4-5% failure rate was all in the first few batches? I don't think that that's very likely, cause what would be the failure rate on those batches, like 50%?
It depends on what time range you're assuming and how biased their data was to that time range. If it happened mostly in the first year of Ryzen 7000 and that's when they got the majority of their orders for those systems, then the shop failure rate could've been similar to Intel's 11th gen. So, it really depends on several unknowns and it's dangerous to make assumptions without that data.

Still, the point is, 2000 computer > 10 computers but people poop on the first and accept the 2nd.
Because the usage pattern is different. You just steamroller over this point, but you really shouldn't. The Alderon Games data represents an accelerated degradation timetable relative to what most other people are doing with Raptor Lake. It effectively gives us the possibility to glimpse the future.

Also, you haven't provided an explanation for why Alderon Games' data should be disregarded. They said they had received numerous warranty replacement CPUs from Intel, so we can't simply attribute their failures to a bad batch of CPUs in just 1 or 2 orders.
 