News Game publisher claims 100% crash rate with Intel CPUs – Alderon Games says company sells defective 13th and 14th gen chips


35below0

Respectable
Jan 3, 2024
1,727
744
2,090
Well, it replaced a machine that I used for 10 years (Sandybridge i7-2600K - and no, I didn't get it new). I'm not planning on getting 10 years out of this one, but I wasn't planning on keeping the old one for that long, either.
(y)
I didn't think the old machine would need more than 8 GB, but then I upgraded it to 16 GB about 4 years ago, and I'm sure glad I did!
I well believe it.
With usage as high as 5-8 GB when doing almost nothing (Windows 10/11), AND with whatever the iGPU needs to drive graphics, 16 GB is very, very low today.
This time, I got 64 GB. Why??? Because it doesn't support memory overclocking and I'm using its iGPU, where memory speed is actually relevant (and also, less bandwidth is left over for the cores). The only way to increase performance beyond baseline DDR5-4800 is to use lower CL RAM and dual-rank memory. With DDR5, the smallest dual-rank DIMMs are 32 GB and you'd better run a pair of them to fill out the full 128-bit datapath!
I was pointing the finger at myself. I got 64 GB of Crucial baseline DDR5 because it was a good price and a lot of RAM. Turns out I very much don't need more than ~20 GB, if that.

That's a very good point about dual-rank DIMMs though! (y)
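
To put rough numbers on the bandwidth side of it, here's a back-of-the-envelope sketch (assuming stock dual-channel DDR5-4800; sustained real-world figures will be lower):

# Theoretical peak bandwidth of a dual-channel (128-bit) DDR5 setup.
# Assumes baseline DDR5-4800; sustained bandwidth will be noticeably lower.
transfers_per_sec = 4800e6        # DDR5-4800 = 4800 MT/s
bus_width_bytes = 128 // 8        # both channels populated = 128-bit datapath
peak_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")   # -> 76.8 GB/s

Dual-rank DIMMs don't raise that ceiling, but they tend to get you closer to it, since the controller has more banks to interleave across.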
The H0 die has a smaller ring bus, for lower latency and better efficiency. Cooling is a little harder, but still not bad, since it doesn't clock very high. These were among the reasons I didn't step up to a bigger die.
Another good point. You really made an effort to tailor the build to your needs and with an eye to getting the most out of it. Well done.
 
  • Like
Reactions: bit_user

TheHerald

Respectable
BANNED
Feb 15, 2024
1,633
501
2,060
Sorry, I expressed myself badly. My take is that Intel sells factory-overclocked processors.
Pressed by the competition, Intel took a step too far.
You realize Intel sells locked CPUs all the way down to 35 W, right? Do you realize how silly it sounds to focus on the K parts when you supposedly don't want "overclocked" parts? It feels like you are going out of your way to find something to complain about.
 

NinoPino

Respectable
May 26, 2022
484
301
2,060
The only major chip bug I could think of, within the past 15 years or so, was the Phenom TLB (?) bug, I think. AMD released a workaround that cost some performance, I've read.
Yes, it was in the early Phenom cores. AMD shipped a firmware update that allowed one to shut down that feature and "solve" the bug, but it cost a lot of performance. Later revisions included a hardware fix that reduced performance by a couple percent.
I guess Haswell's TSX/HLE was another example, but since that was a new ISA extension, it's not like you'd lose that much by it being disabled. It was rather more annoying when microcode updates started rolling out that quietly disabled it in Skylake and newer CPUs, since those processors were shipping for years with it enabled. In this case, it worked as advertised, but got disabled due to side-channel attack vulnerabilities it introduced.
I personally completely sidestepped that one, but I did read about it. It made me worried about getting a Haswell processor, but considering the competition, it was either that or keep my Athlon II X4 for 4 more years - yeah, no.
Yeah, mitigations were (functionally-speaking) like a chance for Intel and AMD to slow down their old CPUs, in order to help keep the upgrade treadmill running. On machines where I don't use a web browser or run anything other than carefully-vetted code, I disable all mitigations.
On Intel processors, that's indeed a killer - minus 20% performance, if not more. On AMD processors, though, it seems the impact was lower, leaving them with a level of performance that keeps them usable - but no one really bothered to benchmark those until recently. It might be interesting, actually. I'll see if I can put a test together and run some numbers! Could be fun.
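
For anyone who wants to check what their own machine is doing before testing, recent Linux kernels expose the mitigation status in sysfs. A minimal sketch (the directory below is standard on current kernels):

# Print the kernel's mitigation status for each known CPU vulnerability.
# Linux only; reads the standard sysfs directory.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")

Booting with mitigations=off on the kernel command line is then the usual way to get the "everything disabled" numbers for comparison.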
 
  • Like
Reactions: bit_user

tamalero

Distinguished
Oct 25, 2006
1,226
241
19,670
Huh. Source?

The only major chip bug I could think of, within the past 15 years or so, was the Phenom TLB (?) bug, I think. AMD released a workaround that cost some performance, I've read.

I guess Haswell's TSX/HLE was another example, but since that was a new ISA extension, it's not like you'd lose that much by it being disabled. It was rather more annoying when microcode updates started rolling out that quietly disabled it in Skylake and newer CPUs, since those processors were shipping for years with it enabled. In this case, it worked as advertised, but got disabled due to side-channel attack vulnerabilities it introduced.


Yeah, mitigations were (functionally-speaking) like a chance for Intel and AMD to slow down their old CPUs, in order to help keep the upgrade treadmill running. On machines where I don't use a web browser or run anything other than carefully-vetted code, I disable all mitigations.
https://en.wikipedia.org/wiki/Pentium_FDIV_bug

https://hardware.slashdot.org/story/14/10/10/193217/where-intel-processors-fail-at-math-again

https://www.tomshardware.com/news/sandy-bridge-sata-error-pentium-fdiv-bug,12115.html

And yes, I remember AMD had a few as well.
 

bit_user

Titan
Ambassador
LOL, but you said "Pentium 4 that did always had issues with some calculations, giving incorrect results too." That wasn't Pentium 4, but rather the original Pentium that launched about a decade earlier!

Okay, that is genuinely interesting. Unfortunately, for all the detail the linked blog post goes into, I don't see any CPU model numbers listed.

One thing that's interesting about CPU implementations of transcendental functions is that they're always approximations. The issue was just that they're not as accurate as claimed. As the author put it:

"I was expecting 33 digits of pi and I only got 21."

Depending on what level of accuracy you're expecting, that might not be a problem. It's still many orders of magnitude better than the FDIV accuracy, and division is a much more fundamental operation.
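
You can actually see the difference a full-precision argument reduction makes, right from Python (a quick sketch; on CPython, math.sin just wraps the C library's sin()):

import math
from decimal import Decimal, getcontext

# For x near pi, sin(x) ~= pi - x, so evaluating sin() at the double
# closest to pi reveals how many digits of pi the implementation uses.
getcontext().prec = 40
x = math.pi                      # the double nearest to pi, not pi itself
pi_true = Decimal("3.141592653589793238462643383279502884197")
residual = pi_true - Decimal(x)  # exact value of (pi - x), to 40 digits

print(math.sin(x))   # ~1.2246467991473532e-16
print(residual)      # ~1.224646799147353177e-16 - agrees to ~16 digits

Old FSIN, with its 66-bit internal value of pi, couldn't deliver that kind of agreement for arguments this close to pi - which is exactly the complaint in that blog post.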

Last point about that post: SSE doesn't even have a hardware sine implementation! That blog post is dated Oct. 2014, which is about 15 years after SSE got introduced. By the time Microsoft defined the Windows 64-bit ABI, they had deprecated x87 arithmetic (there was a rumor they weren't even going to save the x87 registers, but this turned out to be false). Even for scalar arithmetic, SSE is so much faster that most software doesn't even use x87 (unless you're doing scientific calculations, for which you probably wouldn't have been using FSIN anyway).

Citing this tells me you didn't even bother to read it, because they're talking about a motherboard chipset bug - and it had nothing to do with numerical accuracy. Please don't cite sources, unless you can be bothered to take the time and make sure they actually support your case.

Anyway, thanks for cluing me into that FSIN accuracy issue. As I mentioned, it didn't affect most of us because FSIN was rarely used, by that point, and probably not by anyone who really needed scientific-grade accuracy. That's certainly why it didn't make bigger waves. However, it's an interesting bit of trivia, to say the least.
 

bit_user

Titan
Ambassador
Isn't turbo boost and similar features pretty much factory overclocking? And isn't it featured in almost every single one of the ranges, even the locked chips?
Overclocking is when you run a CPU above the officially-supported frequency. By definition, overclocking isn't guaranteed to work. Furthermore, if you overclock an Intel CPU, it will void the warranty. So, by that definition, this isn't overclocking.

Also, the CPU's built-in turbo boosting features are never supposed to run the CPU at a level where it malfunctions. This is another way turbo boost is meaningfully different from overclocking.
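
For what it's worth, you can check both limits yourself on Linux: with the intel_pstate driver, the base (i.e. guaranteed) frequency and the max turbo frequency are both exposed in sysfs. A minimal sketch (paths assume intel_pstate; values are in kHz):

# Compare the base (guaranteed) frequency against the max turbo frequency.
# Assumes Linux with intel_pstate; base_frequency is only exposed there.
from pathlib import Path

cpufreq = Path("/sys/devices/system/cpu/cpu0/cpufreq")
base_khz = int((cpufreq / "base_frequency").read_text())
turbo_khz = int((cpufreq / "cpuinfo_max_freq").read_text())
print(f"Base: {base_khz / 1e6:.2f} GHz, max turbo: {turbo_khz / 1e6:.2f} GHz")

Everything up to that max turbo value is still within Intel's spec - no warranty voided.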
 
Mar 10, 2020
420
384
5,070
bit_user said:
LOL, but you said "Pentium 4 that did always had issues with some calculations, giving incorrect results too." That wasn't Pentium 4, but rather the original Pentium that launched about a decade earlier! [...]
It’s sad to say, but all processors have some bugs; testing and validation can only go so far. For operational accuracy, the Pentium division bug is probably the most critical I can recall. Put simply, it gave incorrect answers. On the performance side, AMD was royally messed up by the Phenom TLB bug. Phenom II was a good chip against Core 2, but it was overshadowed by the first Core i7 processors, and then came the huge misstep.

Personally I’m agnostic when it comes to hardware. I buy the best at the time for the cash I have to play with and then I hope that down the line the “security researchers” don’t uncover a devastating error that will negate the advancements made by the hardware.
 

tamalero

Distinguished
Oct 25, 2006
1,226
241
19,670
bit_user said:
LOL, but you said "Pentium 4 that did always had issues with some calculations, giving incorrect results too." That wasn't Pentium 4, but rather the original Pentium that launched about a decade earlier! [...]
Actually, I tried to search for a very specific case that mentioned the Pentium 4 and Pentium D (a certain stepping), but I'm unable to find it anymore.
I'm now wondering if the article was incorrect and got deleted or corrected.

I mean, it's been almost 20 years since the Pentium D was released.