News Intel's 1500W TDP for Falcon Shores AI processor confirmed — next-gen AI chip consumes more power than Nvidia's B200


Ogotai

Reputable
Feb 2, 2021
332
222
5,060
That the 14900KS is more efficient than, e.g., the 14900K? It's self-evident that on average it should hit the same clocks at lower voltage. That's the whole point of binning.
Then post a source for this claim. So far, what you have been typing is meaningless, as you don't seem to provide much when it comes to sources or proof... it looks like mostly your personal, biased opinion.
 
That the 14900KS is more efficient than, e.g., the 14900K? It's self-evident that on average it should hit the same clocks at lower voltage. That's the whole point of binning.
So, logically speaking, you are saying that we cannot even question a company with profit motives as to the truthfulness of claims made about its products? The implication, then, is that because Intel said they did binning, it must be so? The 14900KS is only ever better than the 14900K if it was indeed binned to an extent that it can clock higher at the same voltage/power. That is a claim that can certainly be tested, as any product should be, to keep companies honest.

Just to be clear, I do not believe that nothing claimed is real until it is empirically tested; I am just pointing out the choppy logic of a statement claiming that assertions made by a company about its products are 'self-evident' with no testing required.
 

TheHerald

Upstanding
Feb 15, 2024
389
99
260
So, logically speaking, you are saying that we cannot even question a company with profit motives as to the truthfulness of claims made about its products? The implication, then, is that because Intel said they did binning, it must be so? The 14900KS is only ever better than the 14900K if it was indeed binned to an extent that it can clock higher at the same voltage/power. That is a claim that can certainly be tested, as any product should be, to keep companies honest.

Just to be clear, I do not believe that nothing claimed is real until it is empirically tested; I am just pointing out the choppy logic of a statement claiming that assertions made by a company about its products are 'self-evident' with no testing required.
Well, I know empirically (because I've used them) that a 13900K and a 14900K cannot clock to 5.9 GHz, which is the stock all-core clock of a KS. Or, to phrase it better, some godly binned 13900K and 14900K samples can, but a lot of them cannot. So on average the binning on the 14900KS is better, just like the binning on the normal 14900K was better than on the 13900K. As I've said before, both my 13900K and 14900K were terribly binned, yet with both locked at 5.5 GHz the 14900K was using 70 W less power, which is actually a pretty big difference.

I know people meme on KS editions because "let's remove all power limits and run Blender on a loop," but the reality is that they are spectacularly efficient chips, and I'd consider buying one for the efficiency alone.
 
Well, I know empirically (because I've used them) that a 13900K and a 14900K cannot clock to 5.9 GHz, which is the stock all-core clock of a KS. Or, to phrase it better, some godly binned 13900K and 14900K samples can, but a lot of them cannot. So on average the binning on the 14900KS is better, just like the binning on the normal 14900K was better than on the 13900K. As I've said before, both my 13900K and 14900K were terribly binned, yet with both locked at 5.5 GHz the 14900K was using 70 W less power, which is actually a pretty big difference.

I know people meme on KS editions because "let's remove all power limits and run Blender on a loop," but the reality is that they are spectacularly efficient chips, and I'd consider buying one for the efficiency alone.
What do you suppose is the reason for the discrepancy in efficiency, as it relates to wattage used, between your 13900K and 14900K? Can you, on logical grounds, inductively reason that the difference in power use is binning? There are several reasons why I would argue you cannot come to this conclusion without standing on very shaky logical ground. It could be that the ASIC quality of your 14900K is significantly better, rather than the binning, that allows it to clock the same at ~80 W less. It could also be that your 13900K has exceptionally bad ASIC quality compared to your 14900K. Several more possibilities would need to be ruled out before coming to any generalized architectural conclusions about efficiency.

To be clear, ASIC and silicon quality are BOTH binned for; however, the implication above is that what is being binned for is silicon quality. Less leaky chips tend to overclock somewhat more poorly than leakier chips. Higher silicon quality chips tend to OC better but are inherently less efficient at lower voltages because they are usually leakier. A chip can be good or bad in either ASIC or silicon quality, independently.
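That leakage tradeoff can be sketched with a toy first-order CMOS power model. Every constant below is invented for illustration and is not a real 13900K/14900K figure:

```python
# Toy first-order CMOS power model. All constants are made up for
# illustration and are NOT real Raptor Lake numbers.

def package_watts(c_eff: float, volts: float, ghz: float, i_leak: float) -> float:
    """Dynamic switching power (C_eff * V^2 * f) plus static leakage (V * I_leak)."""
    dynamic = c_eff * volts ** 2 * ghz * 1e9  # switching power in watts
    static = volts * i_leak                   # leakage power in watts
    return dynamic + static

# Two hypothetical chips locked at the same 5.5 GHz all-core clock:
tight = package_watts(c_eff=3e-9, volts=1.20, ghz=5.5, i_leak=10.0)  # low-leakage part
leaky = package_watts(c_eff=3e-9, volts=1.15, ghz=5.5, i_leak=45.0)  # leakier part, slightly lower V
```

In this toy model the leakier chip draws more at iso-frequency even at lower voltage, which is why one iso-clock power number alone can't separate binning from leakage.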
 

TheHerald

What do you suppose is the reason as to the discrepancies between your 13 vs 14 900k CPUs' efficiencies as it relates to wattage used?
Yes, better silicon quality, which is the same thing as binning.

I don't have to trust Intel that the 14900KS is binned; it is binned. All CPUs get binned; that's how you get CPUs down the stack (14700, F-series with no iGPU, etc.). For the 14900KS to not have better silicon quality than the rest of the chips would mean that although Intel bins the chips, they also choose the worst silicon quality for the 14900KS part and sell the good silicon quality ones as normal 14900Ks. It simply makes no sense.
 
Yes, better silicon quality, which is the same thing as binning.

I don't have to trust Intel that the 14900KS is binned; it is binned. All CPUs get binned; that's how you get CPUs down the stack (14700, F-series with no iGPU, etc.). For the 14900KS to not have better silicon quality than the rest of the chips would mean that although Intel bins the chips, they also choose the worst silicon quality for the 14900KS part and sell the good silicon quality ones as normal 14900Ks. It simply makes no sense.
No, that is incorrect. The best silicon quality chips go to Intel's higher-end Xeon chips, and to mobile chips in the case of high ASIC quality. If there were no checks on the company, they could just say a 14900KS is better but then sell you a chip with the same silicon quality as a 14900K, just with a higher TDP for better performance.

You also completely ignore the fact that better silicon quality is not the only factor in binning. ASIC quality is highly important to the efficiency of the chip, irrespective of its silicon quality. You can have a very tight ASIC quality chip with poor silicon, meaning it won't clock very high, but the clocks it can achieve will be very efficient. Typically, higher-clocking chips are leakier than others, meaning they are less energy efficient.
 

TheHerald

No, that is incorrect. The best silicon quality chips go to Intel's higher-end Xeon chips, and to mobile chips in the case of high ASIC quality.
You are wrong (do Xeons even use the Raptor Lake die??), but it doesn't matter even if you aren't. The KS still takes precedence over the K, regardless of laptops and Xeons.

If there were no checks on the company, they could just say a 14900KS is better but then sell you a chip with the same silicon quality as a 14900K, just with a higher TDP for better performance.
The fact that the KS runs at clock speeds that not all 13900K and 14900K chips can achieve makes it super obvious that that is not the case.

You also completely ignore the fact that better silicon quality is not the only factor in binning. ASIC quality is highly important to the efficiency of the chip, irrespective of its silicon quality. You can have a very tight ASIC quality chip with poor silicon, meaning it won't clock very high, but the clocks it can achieve will be very efficient. Typically, higher-clocking chips are leakier than others, meaning they are less energy efficient.
You do understand that ASIC and silicon quality are the same thing? Or what do you mean by silicon quality? What is that?
 
You are wrong (do Xeons even use the Raptor Lake die??), but it doesn't matter even if you aren't. The KS still takes precedence over the K, regardless of laptops and Xeons.


The fact that the KS runs at clock speeds that not all 13900K and 14900K chips can achieve makes it super obvious that that is not the case.


You do understand that ASIC and silicon quality are the same thing? Or what do you mean by silicon quality? What is that?
ASIC and silicon quality are two distinctly different things. Intel definitely has Raptor Lake Xeons. A large portion of 13900K/14900K chips can get to stock 14900KS clocks.

Silicon quality loosely refers to the ability of a chip to reach higher clocks than others because there are fewer imperfections in the wafer used for the chip, better doping, et cetera. ASIC quality is a rather poor term used to describe how much or how little voltage leakage any given chip has. Typically, a very tight chip, meaning one with very little leakage, will not overclock very well but will be very efficient at the clocks it can attain. A leaky chip will be less efficient but can typically attain higher clocks.
 

TheHerald

ASIC and silicon quality are two distinctly different things. Intel definitely has Raptor Lake Xeons. A large portion of 13900K/14900K chips can get to stock 14900KS clocks.

Silicon quality loosely refers to the ability of a chip to reach higher clocks than others because there are fewer imperfections in the wafer used for the chip, better doping, et cetera. ASIC quality is a rather poor term used to describe how much or how little voltage leakage any given chip has. Typically, a very tight chip, meaning one with very little leakage, will not overclock very well but will be very efficient at the clocks it can attain. A leaky chip will be less efficient but can typically attain higher clocks.
The Raptor Lake Xeons aren't using the desktop die, man. They don't have E-cores.

That's not exactly how ASIC quality works; it's way more complicated than that. Higher ASIC quality is the better overclocker, as long as extreme OC (LN2, etc.) isn't in the mix.
 

bit_user

Polypheme
Ambassador
Well I know empirically
But you didn't try the actual experiment, so you don't. And given how many of your assumptions have been wrong, I think even you shouldn't trust them.

Yes, better silicon quality, which is the same thing as binning.

I don't have to trust Intel that the 14900KS is binned; it is binned.
You assume there's only one parameter used for binning, but that's not necessarily so.

The Raptor Lake Xeons aren't using the desktop die, man. They don't have E-cores.
Right here, it says they're "Code Name: Products formerly Raptor Lake" and the Ordering and Compliance Information says they're even B0 stepping:
 

TheHerald

What die do you think they're using then?
Right here, it says they're B0 stepping:

Check every spec on them. They match Raptor Lake except for the E-cores and iGPU, which is pretty obviously because Intel simply disabled them.
I'm addressing you both since you are kinda asking the same thing.

I don't know what die they are using; as I've said previously, I don't really care about server CPUs, but I'd assume. If they indeed use the same die, that's weird, because those look more like salvaged parts rather than fully functioning dies.

Regardless, it really really really doesn't matter. It doesn't make a difference to the argument. If the Xeons do indeed use the best dies - then the 14900ks is using the 2nd best dies and the normal 14900k the 3rd best dies etc.
But you didn't try the actual experiment, so you don't. And given how many of your assumptions have been wrong, I think even you shouldn't trust them.


You assume there's only one parameter used for binning, but that's not necessarily so.
Actual experiment meaning the 14900KS? No, I haven't, but that's exactly how it has worked with every other CPU beforehand (12900k ---> 12900ks, 13900k ---> 13900ks ---> 14900k), so I can't fathom why this would possibly be any different. I'm so confident about it that I'm willing to bet money that if you grab all three of these CPUs (13900K, 14900K, 14900KS) and lock them to the same clocks, the 14900KS will be drawing the least amount of power.

You'd be amazed at how huge the difference is between a 13900K and a 14900K. The 13900K was really struggling to lock to 5.5 GHz in all-core workloads without undervolting, while the 14900K was literally just chilling at 80-85°C.
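For anyone who actually wants to run that iso-clock comparison on Linux, average package power can be derived from the RAPL energy counter in sysfs. This is only a sketch: the sysfs path and the counter wrap range below are assumptions that vary by platform, so check both on your own machine:

```python
import time

# Sketch of measuring average package power on Linux via RAPL.
# The path and wrap range are assumptions; verify them on your system
# (the real wrap range is in max_energy_range_uj next to the counter).
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"
RAPL_WRAP_UJ = 2 ** 32  # assumed counter wrap range in microjoules

def average_watts(e0_uj: int, e1_uj: int, seconds: float,
                  wrap_uj: int = RAPL_WRAP_UJ) -> float:
    """Average power between two energy_uj samples, handling counter wrap."""
    delta = e1_uj - e0_uj
    if delta < 0:          # counter wrapped between samples
        delta += wrap_uj
    return delta / 1e6 / seconds  # uJ -> J, then J/s = W

def measure(seconds: float = 10.0) -> float:
    """Sample the package energy counter across a fixed-clock workload."""
    with open(RAPL_PATH) as f:
        e0 = int(f.read())
    time.sleep(seconds)  # run the locked-clock load in another process
    with open(RAPL_PATH) as f:
        e1 = int(f.read())
    return average_watts(e0, e1, seconds)
```

Locking all three chips to the same all-core clock and comparing `measure()` output would be one way to test the bet above without trusting anyone's graphs.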
 
I'm addressing you both since you are kinda asking the same thing.

I don't know what die they are using; as I've said previously, I don't really care about server CPUs, but I'd assume. If they indeed use the same die, that's weird, because those look more like salvaged parts rather than fully functioning dies.

Regardless, it really really really doesn't matter. It doesn't make a difference to the argument. If the Xeons do indeed use the best dies - then the 14900ks is using the 2nd best dies and the normal 14900k the 3rd best dies etc.
There's only one desktop Raptor Cove die, so they're certainly using the same one as desktop, and what's somewhat notable is that it's B0 throughout the entire stack (lower desktop uses a mix, but predominantly H0 ADL).

I don't actually think the Xeons got the best die in the case of these SKUs given their clocks, disabled graphics and E-cores. I just think being accurate with regards to what is being said matters.

Based on bin testing, 13th Gen KS > K, but KF is a mix of both, as Intel doesn't bin those dies at all. This doesn't actually make the KS necessarily the most efficient chip, just the best on the V/F curve. Roman (der8auer) has spoken about how the most efficient chips aren't necessarily the best at clocks, and that has been especially true with 12th Gen onward. I have no doubt that mobile bins get the most efficient B0 dies and, given their relative sales volume, probably take away from what would otherwise be KS parts.
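Roman's point about clocks versus efficiency can be illustrated with two invented V/F curves: the leakier die stretches to a higher maximum clock, yet the tighter die draws less power at a fixed mainstream clock. All constants here are hypothetical, not measured Raptor Lake data:

```python
def volts_needed(ghz: float, v0: float, k: float) -> float:
    """Toy quadratic V/F curve: required voltage rises with frequency squared."""
    return v0 + k * ghz ** 2

def watts(ghz: float, v0: float, k: float, i_leak: float, c_eff: float = 3e-9) -> float:
    """Dynamic (C_eff*V^2*f) plus leakage (V*I_leak) at the curve's voltage."""
    v = volts_needed(ghz, v0, k)
    return c_eff * v ** 2 * ghz * 1e9 + v * i_leak

def fmax_ghz(v0: float, k: float, i_leak: float = 0.0, v_cap: float = 1.30) -> float:
    """Highest frequency reachable before the voltage cap (i_leak unused:
    leakage doesn't move the cap in this toy model)."""
    return ((v_cap - v0) / k) ** 0.5

# Invented bins: "tight" leaks little but its curve steepens early;
# "leaky" burns more baseline current yet stretches to higher clocks.
tight = dict(v0=0.70, k=0.016, i_leak=8.0)
leaky = dict(v0=0.75, k=0.013, i_leak=35.0)
```

With these made-up curves, `watts(5.5, **tight)` comes out below `watts(5.5, **leaky)` while `fmax_ghz(**leaky)` exceeds `fmax_ghz(**tight)`, which is the "best bin for the V/F curve isn't the most efficient bin" shape of the argument.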
 

TheHerald

There's only one desktop Raptor Cove die, so they're certainly using the same one as desktop, and what's somewhat notable is that it's B0 throughout the entire stack (lower desktop uses a mix, but predominantly H0 ADL).

I don't actually think the Xeons got the best die in the case of these SKUs given their clocks, disabled graphics and E-cores. I just think being accurate with regards to what is being said matters.

Based on bin testing, 13th Gen KS > K, but KF is a mix of both, as Intel doesn't bin those dies at all. This doesn't actually make the KS necessarily the most efficient chip, just the best on the V/F curve. Roman (der8auer) has spoken about how the most efficient chips aren't necessarily the best at clocks, and that has been especially true with 12th Gen onward. I have no doubt that mobile bins get the most efficient B0 dies and, given their relative sales volume, probably take away from what would otherwise be KS parts.
It's hard to get actual data, since most reviews are focused on hitting 500 watts running Blender because **itting on Intel drives more traffic, so if I ever want to know, I need to test it myself. But let's just say that I'd be shocked if the KS wasn't hitting 5.5 all-core with considerable ease at very low power levels.
 
It's hard to get actual data, since most reviews are focused on hitting 500 watts running Blender because **itting on Intel drives more traffic, so if I ever want to know, I need to test it myself. But let's just say that I'd be shocked if the KS wasn't hitting 5.5 all-core with considerable ease at very low power levels.
The point of the correspondence above was not to say the 14900KS is or is not the fastest, or even that it is more or less efficient; I was trying to challenge the thought process and axioms that led you to those conclusions. I am not even really far from what you believe. The problem I have with what you are saying is often how you present your claims, and how they are often generalities about something super specific. I can usually glean more from a person's conclusions about a particular topic by examining the thought process by which they came to those conclusions, rather than arguing about the conclusions themselves.

I understand that the prevailing motivation is to **it on Intel for their power profile schemes, but that does not automatically mean that there is nothing to the overblown conversation. I enjoyed this conversation much more with you than the ones in the past. Just as a point of reference, I am often incorrect in my assumptions, but I typically learn a lot being called out for them because I know I am prone to making them and try to correct my understandings. I love this community for the very reason I am able to learn so much about tech in these very same comments sections of the forums.
 

TheHerald

I understand that the prevailing motivation is to **it on Intel for their power profile schemes
The prevailing motivation is to "shit on Intel," full stop. Yeah, nowadays it happens to be because of the power profiles on the fully unlocked and pre-overclocked K CPUs, because that's all those popular tech tubers have. Before, it was about the lack of MT performance (see how nobody talks about that anymore now that Intel has a huge lead, funny huh?). Intel has a BIG lead in games? "Who plays at 720p with an $800 GPU?" AMD has a tiny lead in games, and suddenly everyone plays at 720p with a $2,000 GPU. It's tiresome.

I find it extremely funny that so much emphasis is put on efficiency nowadays, but they don't bother to test a locked CPU, let's say a 14900 non-K. It would top the entire chart in efficiency, and yeah, we can't have that now, can we?
 

bit_user

It's hard to get actual data since most reviews are focused on hitting 500 watts running blender
...
Yeah, but you can't just assert something to be true for lack of data showing otherwise. If you believe something is true but lack proof, you should at least couch your language as speculation.

The other thing you might do is try contacting some of these publications and YouTubers to explain why you think it's a worthwhile test, and see if you can get them to run it. A lot of sites should have an i9-14900KS just sitting around if they've reviewed it in the past.
 

TheHerald

Yeah, but you can't just assert something to be true for lack of data showing otherwise. If you believe something is true but lack proof, you should at least couch your language as speculation.

The other thing you might do is try contacting some of these publications and YouTubers to explain why you think it's a worthwhile test, and see if you can get them to run it. A lot of sites should have an i9-14900KS just sitting around if they've reviewed it in the past.
Right, because gnexus and hwunboxed don't already understand that that's how you measure efficiency and that's how people are going to run their CPUs; they're just waiting for me to tell them. As if there is a single person alive who buys a 14900KS to run Blender for 10 hours a day without power limits. Come on, man.

Even if by some miracle they do, their numbers are not to be trusted. Anandtech / hwunboxed / tpup already have some completely made-up graphs at ISO power, so yeah, I'm not very interested in what they have to say. Obviously it wasn't on purpose ( :D ); it just so happens that every single time, their graphs portray Intel as a lot less efficient than it is in reality. Unlucky, I guess.