Intel's 1500W TDP for Falcon Shores AI processor confirmed — next-gen AI chip consumes more power than Nvidia's B200

Then every product in the history of products is great. If you ignore all the things that a product doesn't do well, you can make the worst products that ever existed look good.

AMD markets the 7800X3D as a gaming CPU because it is pretty horrible (for the price) at anything but games. If it were actually good at anything other than gaming, it wouldn't be marketed as just a gaming CPU.

Would you ever use that argument for the 14900KS? Guys, it's a single-threaded-oriented CPU, so let's ignore every negative it has and focus on single-threaded performance. That would be silly, so I refuse to do it for the 7800X3D as well.
This is your most disingenuous argument yet, because the X3D chips wouldn't exist if they weren't good at gaming.

You're more concerned with the potential efficiency Intel chips can have (meaning after tweaking or running outside of stock) or specific situations. @bit_user is concerned more with stock behaviors and equivalent comparisons. You're both never going to agree on this one so please just stop with the back and forth so another thread isn't completely derailed.
 
This is your most disingenuous argument yet, because the X3D chips wouldn't exist if they weren't good at gaming.

You're more concerned with the potential efficiency Intel chips can have (meaning after tweaking or running outside of stock) or specific situations. @bit_user is concerned more with stock behaviors and equivalent comparisons. You're both never going to agree on this one so please just stop with the back and forth so another thread isn't completely derailed.
I have refrained from conversing in this thread because @bit_user , @TheHerald , @TerryLaze and I cannot even agree on what base terms mean before we argue for pages and pages, as you have stated. If nobody can agree on the definitions of the terms we use before argumentation, there is no shot at any sort of conclusion or appreciation of the facts. It's easy to fall into ad hominem and non sequiturs when no basis for an argument is agreed upon, and nobody can come to a solid conclusion of bad-faith argumentation from any party without such a basis. Giving credit where it's due, @Silas Sanchez also mentioned the above to some effect.
 

slightnitpick

I have refrained from conversing in this thread because @bit_user , @TheHerald , @TerryLaze and I cannot even agree on what base terms mean before we argue for pages and pages, as you have stated. If nobody can agree on the definitions of the terms we use before argumentation, there is no shot at any sort of conclusion or appreciation of the facts. It's easy to fall into ad hominem and non sequiturs when no basis for an argument is agreed upon, and nobody can come to a solid conclusion of bad-faith argumentation from any party without such a basis.
What the hell!!!!?? Saying you can't agree on what terms mean when a SIMPLE DICTIONARY can tell you all what they mean¿¡~

j/k, of course.

As a person watching the argument I basically stopped reading the back-and-forth after about 5 comments.
 

TheHerald

This is a simplistic argument. A better take is to say that not every product is right for every user. This is consistent with the idea that you cannot make a value judgement for other users. Just because one CPU has better MT performance than another doesn't mean it's a better fit for my needs. Only I know what my priorities are and can weigh its pros and cons.

Seriously, if we could just assign one or two numbers that characterized the entirety of these products, don't you think people would do that? But, it's not that simple. That's why there are lots of different benchmarks and metrics. To make the optimal decision, each person needs to consider their constraints (cost, heat, noise, compatibility, etc.) and priorities and look at how the data on different products measures up.
That's where the veil of ignorance comes in. Yes, horrible products can be great for some specific people and niche use cases, but they are still horrible. See the veil-of-ignorance method, which basically asks: if you have no clue what you are going to do with your CPU (games, MT, ST, etc.), which one would you choose? The 7800X3D would drop to the bottom of the list.

"Horrible" is a strong word. Also, its efficiency is pretty good, especially considering other factors like its gaming performance.
Based on the price, it is horrible. Even excluding Intel CPUs, it's slower in non-gaming workloads than way cheaper AMD products.

I can't do that and neither can you, because you don't know how important the different aspects are to someone else.
I bet a paycheck that most people who didn't buy a 7700K back in 2017 because "who plays at 1080p with a 1080 Ti" and "10% faster in games but 50% slower in MT performance" are now playing at 1080p with a 4090 rocking a 7800X3D. I know, I've seen it happen in lots of communities, forums, etc.
You just said:
"... idle and light load power draw isn't (fixable), it's inherent to multi die cpus."​
I showed where a multi-die CPU can have decent power and efficiency. So, that's that.

If you are now trying to move the goalposts, then it's a different discussion.
How am I moving the goalposts, man? Your example is the best-case scenario for a multi-die CPU, and it's still nowhere near the top of the chart in efficiency. So can't you see there is a direct correlation between multi-die CPUs and high power draw at low-load and idle states? Isn't it obvious that this is indeed the case?

It's an example. If we take the 5700G, that's interesting, but if someone wants better performance then it might not be an option for them. That's another example where it could be problematic to compare different classes of CPUs. It's like I said about the i5-13400F: you can't just look at its 1T efficiency in isolation. You also have to take into account its other attributes.
Do you realize that every single 12th or 13th gen CPU can hit the 13400's efficiency numbers, more or less, just by restricting the clock speeds? I'd argue a 13700K can do better, simply because it can hit those clock speeds with lower power. That is not the case with AMD CPUs. You can't take a 7950X and make it match the R7's efficiency, simply because it has twice the dies; no matter how low you drop the clocks, it will still be pretty mediocre in 1T efficiency.
 

bit_user

You're more concerned with the potential efficiency Intel chips can have (meaning after tweaking or running outside of stock) or specific situations. @bit_user is concerned more with stock behaviors and equivalent comparisons.
I think we're now mostly talking about stock vs. stock, but the main issue is the product mismatches and the cherry-picking of one result from one Intel CPU combined with a result on a different metric from another Intel CPU to build a distorted picture.

You're both never going to agree on this one
If it were something as simple as what you described, then I absolutely could make a caveated statement of agreement on a narrow set of facts. But it goes beyond your characterization, and it's the sweeping statements built upon that house of cards that really go too far.
 

TheHerald

This is your most disingenuous argument yet, because the X3D chips wouldn't exist if they weren't good at gaming.
You realize they can exist - be good at games - and not suck horribly at everything else, right? We already have an example of that: the 7950X3D. GOAT of a CPU. The other 3D options are a joke.

You're more concerned with the potential efficiency Intel chips can have (meaning after tweaking or running outside of stock) or specific situations. @bit_user is concerned more with stock behaviors and equivalent comparisons. You're both never going to agree on this one so please just stop with the back and forth so another thread isn't completely derailed.
I'm fine with stock, out-of-the-box comparisons, but if the comparison is about efficiency, only a fool who cares about efficiency would buy a 14900K or a 14900KS and run Blender in a loop without power limits. So talking about stock, out-of-the-box efficiency on a 14900K or a 14900KS is just pointless. Users who care about efficiency don't run it out of the box; they apply power restrictions, or they buy the non-K or T models.
 

bit_user

As a person watching the argument I basically stopped reading the back-and-forth after about 5 comments.
When a propagandist can't successfully win a faithless argument, it's not a bad consolation prize if they can bog down the discussion so badly that observers lose track and see it as little more than a he-said / she-said situation.

The reason this counts as a partial victory is that it seeds doubt in whatever narrative they were trying to undermine. Furthermore, when people aren't sure where the truth lies, the natural inclination is to assume it's somewhere in between, even if it's squarely on one side or the other.
 

slightnitpick

When a propagandist can't successfully win a faithless argument, it's not a bad consolation prize if they can bog down the discussion so badly that observers lose track and see it as little more than a he-said / she-said situation.

The reason this counts as a partial victory is that it seeds doubt in whatever narrative they were trying to undermine. Furthermore, when people aren't sure where the truth lies, the natural inclination is to assume it's somewhere in between, even if it's squarely on one side or the other.
Sure, but I quit before even reaching that point. All I got from this is that when it comes time to upgrade I'll look at basic benchmarks myself, using my criteria, and decide from there.
 

bit_user

Based on the price, it is horrible. Even excluding Intel CPUs, it's slower in non-gaming workloads than way cheaper AMD products.
But, if someone primarily or exclusively cares about gaming, then the value-for-money isn't at all bad.

How am I moving the goalposts, man?
You were carrying on about how chiplet CPUs are intrinsically inefficient and referencing this unproven anecdote about how your brother's Ryzen spikes up to 60 W when browsing YouTube comments. Showing that a chiplet-based CPU doesn't necessarily behave that way is an easy bar for me to clear, so that's what I did. Now, you're trying to change the point of dispute to the question of whether any power or efficiency gap exists between chiplet and monolithic. I never said it didn't, and I certainly never said that a chiplet-based CPU was the most efficient. You moved the goalposts.

Just to be clear, I'm not calling you or your brother liars. I'm just saying that unless the data is independently generated and properly investigated to see if there's some bug or configuration problem behind it, I have to treat it as merely an anecdote. It also presumes the workload is effectively single-threaded, which I highly doubt.

Do you realize that every single 12th or 13th gen CPU can hit the 13400's efficiency numbers, more or less, just by restricting the clock speeds?
No, that's a tall claim and must be proven. Especially the part about Gen 12 or any CPUs based on the Alder Lake H0 stepping die. I also question the relevance. It seems like you're trying to change the topic yet again, in spite of @thestryker 's statement of exasperation at this exchange.

I suspect the other thing going on is that you're trying a backdoor to the same issue of mismatched product comparisons, where you're going to say that because someone can limit an i9-13900K to make it replicate the i5-13400F's efficiency, that it's valid to use its efficiency data for the i9-13900K. In which case, I would say that you also have to use performance data collected when the i9 is configured like that.

Performance, power, and efficiency are different sides of the same coin. They cannot be treated separately. This is why I like plots like the one I made from that ComputerBase.de article, since it shows efficiency as the actual relationship between power and performance of the respective CPUs.

[Image: cj1qY3F.png - power vs. performance plot built from the ComputerBase.de data]
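As a rough illustration of what that kind of plot encodes (the numbers below are invented, not taken from the ComputerBase.de data): efficiency is just the ratio between the two axes, so the same chip lands at different efficiency points depending on where along its power/performance curve it's run.

# Hypothetical (package watts, benchmark score) points along one CPU's
# power/performance curve -- invented numbers, for illustration only.
curve = [(65, 24000), (125, 31000), (253, 36000)]

for watts, score in curve:
    print(f"{watts:>4} W -> {score} pts -> {score / watts:.0f} pts/W")

# Typical pattern: points-per-watt falls as the same silicon is pushed
# further up its power/performance curve.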


I'd argue a 13700K can do better, simply because it can hit those clock speeds with lower power. That is not the case with AMD CPUs. You can't take a 7950X and make it match the R7's efficiency, simply because it has twice the dies; no matter how low you drop the clocks, it will still be pretty mediocre in 1T efficiency.
Heh, there you go with sweeping statements built on narrow facts. When I thought you were talking about efficiency, writ large, I was going to point out that you don't know the precise perf/W curve of Zen 4, without which you really can't say there's no point where it won't come out ahead. But, then comes the twist, where what sounds like a broad claim actually turns out to be this weird fixation you have with 1T efficiency.
 
You realize they can exist - be good at games - and not suck horribly at everything else, right? We already have an example of that: the 7950X3D. GOAT of a CPU. The other 3D options are a joke.
GOAT of a CPU that's slower than the 7950X (and 13900K+) at MT and overall slower than a 7800X3D (and 13900K+) at gaming. The only way that CPU makes any sense to buy is if you're predominantly doing something that heavily favors cache and/or you can't have the power consumption of Intel.
 

TheHerald

GOAT of a CPU that's slower than the 7950X (and 13900K+) at MT and overall slower than a 7800X3D (and 13900K+) at gaming. The only way that CPU makes any sense to buy is if you're predominantly doing something that heavily favors cache and/or you can't have the power consumption of Intel.
It's not slower than those two. I mean, it is, but only because it is (necessarily) power-limited lower. But I wouldn't run a 13900K or a 7950X at their default power configurations if I wanted MT performance, so they would be slower as well. It's not slower than the 7800X3D in games unless you don't know what you're doing.

The 7800X3D isn't even that good in gaming unless you are running out of the box. Tuning your RAM drops it to something like the 5th-fastest (or worse) overall gaming CPU. Just yesterday - I kid you not - a guy on Twitter was asking to test my 12900K vs. his 7800X3D in TLOU, saying I should prepare a body bag for my CPU, blah blah. He kept going on forever about it. Then we both posted the videos - he called me a cheater and blocked me. :love:

RAM tuning on Intel (and AMD) CPUs gives huge gains in games - gains the 7800X3D gets by default due to its large cache.
But, if someone primarily or exclusively cares about gaming, then the value-for-money isn't at all bad.
Well, the same applied to multiple Intel CPUs in the past (e.g. the 7700K), but I still considered them horrible. Exclusively caring about gaming doesn't mean much - do you have a 4090 at 1080p? If not, you buy a 13600K or a 7600X; you don't buy a CPU that until recently was over $400.

You were carrying on about how chiplet CPUs are intrinsically inefficient and referencing this unproven anecdote about how your brother's Ryzen spikes up to 60 W when browsing YouTube comments. Showing that a chiplet-based CPU doesn't necessarily behave that way is an easy bar for me to clear, so that's what I did. Now, you're trying to change the point of dispute to the question of whether any power or efficiency gap exists between chiplet and monolithic. I never said it didn't, and I certainly never said that a chiplet-based CPU was the most efficient. You moved the goalposts.
But how are you showing that? Like, really, you took a single-CCD, low-clocked Zen 4 CPU, which was still nowhere near the top.

Just to be clear, I'm not calling you or your brother liars. I'm just saying that unless the data is independently generated and properly investigated to see if there's some bug or configuration problem behind it, I have to treat it as merely an anecdote. It also presumes the workload is effectively single-threaded, which I highly doubt.
You can call us liars, no issue. Just look around. I googled and just found this, from PCWorld's review, about web browsing.

The 7950X was drawing about 120 W on average, while RPL was at about 80 W.

That's system power, of course. There are multiple tests like these; Technotice has tested AutoCAD, Premiere, Photoshop, etc., and he concluded that Intel was pulling a lot less power on average - but with higher peaks. I don't want to spam the post with video links; if you doubt what I'm claiming, ask and I'll provide the videos.

No, that's a tall claim and must be proven. Especially the part about Gen 12 or any CPUs based on the Alder Lake H0 stepping die. I also question the relevance. It seems like you're trying to change the topic yet again, in spite of @thestryker 's statement of exasperation at this exchange.
If TPUP tested the C0 die (it's the most common, btw), then that's basically using ADL P-cores. Also, the expanded graph I posted already has a 12400 scoring much higher in ST efficiency than the 13400F. Yes, the 13400 was slightly (5.07%) faster, but the gap in efficiency was huge. So it stands to reason that any Intel CPU can actually get similar efficiency. Notice I said similar, not identical; bigger dies with more cache will be slightly worse.

Heh, there you go with sweeping statements built on narrow facts. When I thought you were talking about efficiency, writ large, I was going to point out that you don't know the precise perf/W curve of Zen 4, without which you really can't say there's no point where it won't come out ahead. But, then comes the twist, where what sounds like a broad claim actually turns out to be this weird fixation you have with 1T efficiency.

It's not a fixation; 1T was what we were talking about. It was you who brought it up. You literally posted the 1T efficiency graph.
 

TheHerald

This is what he keeps doing: making general blanket statements that are unsound. The problem is people like him can't see their flaws, as they are full of ego and attitude. That is exactly what cognitive bias is, and why we have discourse that opens us up to other experts, so we have a system that weeds out our bias. He can't be part of that system because he's full of ego and attitude. A rational system by nature weeds out people like him.

I'd question whether binning makes a CPU great. I think binning is more a marketing term used to prey on people who are all pumped up, and is really just a word for sorting grades of quality. In semiconductors, grades of quality fundamentally have to do with the mobility of electrons and holes; more mobility means potentially faster switching, which is affected by the quality of the doping, since bad doping affects the precision of the dynamic interplay between drift and diffusion. The level of impurities, like defects and grain boundaries in the crystal lattice, which shunt away current, also matters, as does the quality of the lithography technique and technology. Higher quality and luck give rise to variation in these things, which can help reduce the heat the CPU produces at higher frequency. But the real advancements are due to architecture and overall implementation.

Saying "the most efficient CPU intel has produced" is just an unsound statement. It's unsound because it makes assumptions of behalf of others, a very rudimentary mistake but one he is not aware of. He simply can't get it through his head that efficiency can mean many things and their are different sounding ways to speak of efficiency, but he is asserting his own definition in a blanket statement. In science we have precise definitions so we can all be in agreement and to avoid troublemakers who dont really understand the scientific method. but outside of these precise definitions there are other types of efficiencies, some being very abstract.

A solar cell is said to be very efficient as it represents years of engineering and research, yet from a physics POV they are morbidly inefficient, as they leave over 80% of the sun's energy on the table in the real world. The new generation of thin films are very efficient in a way, as they are very cheap to make even for their rather poor conversion efficiency and lifespan, yet from an environmental POV they are horribly inefficient due to their short lifespan and waste. For many people solar is inefficient, as it takes many years for it to pay for itself when they could just use less electricity and spend that money elsewhere. For some who prioritize pleasure, they can actually lead to unhappiness, which makes for an inefficient way of living life.
It's quite obvious that in the context of a CPU, efficiency refers to performance per watt. I'm pretty sure everyone (well, except you, I guess) understands that and is talking about the same thing.

The scientific method involves something called ceteris paribus. It's fundamental to making empirical comparisons. That's why I insist on iso-power / iso-performance efficiency comparisons, because not doing that makes your comparison flawed. The 14900KS - with all other things being equal (ceteris paribus) - is the most efficient CPU Intel has ever made in MT performance. Of course, I'm talking about desktop CPUs, not huge Xeons and Threadrippers that cost thousands.

I haven't had the pleasure of testing a 14900KS, but comparisons between the 13900K and the 14900K that I do have the pleasure of owning paint a very clear picture. With both of them configured at 5.5 GHz - i.e., the exact same performance - the 14900K was pulling 270-280 watts running CBR23, while the 13900K was sitting at 340-350 watts.
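Back-of-the-envelope on those numbers (the CB R23 score below is a placeholder; only the wattages come from the figures above): at matched performance, efficiency scales inversely with power, so that gap works out to roughly 25%.

score = 38000          # placeholder CB R23 multi-core score, identical for both by construction
watts_14900k = 275     # midpoint of the 270-280 W quoted above
watts_13900k = 345     # midpoint of the 340-350 W quoted above

eff_14900k = score / watts_14900k
eff_13900k = score / watts_13900k
print(f"14900K: {eff_14900k:.0f} pts/W, 13900K: {eff_13900k:.0f} pts/W")
print(f"ratio: {eff_14900k / eff_13900k:.2f}x")   # ~1.25x at equal performance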
 

TheHerald

And now people should closely watch this video


Technotice, known for years for his creativity benchmarks, has been a huge proponent of AMD all these years because AMD was indeed better in those kinds of workloads. And what are his findings regarding 12th gen, 13th gen, and Zen 4? Exactly what I'm saying: at idle and light loads, Zen 4 is a lot less efficient than Intel CPUs. He is measuring both from software and from the wall, and the differences are really staggering.

Real-world usage is not removing power limits and hitting the Blender button.
 

CmdrShepard

Hey, the microplastics in the oceans and air didn't all put themselves there! We need to do our part!
; )
I suggest we feed the CEOs of plastic producing companies a couple of tonnes of plastic to offset this. We could extend that to the CEOs of oil extracting companies too, since it all started there.

Snark aside, how am I supposed to know that my "Socks with plastic (TM)" aren't shedding micro / nano particles which then end up in my blood vessel plaque lining?

We should stop buying that junk until they stop selling it and go back to cotton.
 

CmdrShepard

This is a simplistic argument. A better take is to say that not every product is right for every user.
But there are products, like DM-SMR HDDs, which are not right for every user. The only thing they are good for is archival purposes, and then only if you are always rewriting a whole drive.
This is consistent with the idea that you cannot make a value judgement for other users.
Yes you can. There are things which are universally bad (see DM-SMR).
Seriously, if we could just assign one or two numbers that characterized the entirety of these products, don't you think people would do that?
We could assign the numbers.

However, marketing is inherently based on deception, so that's why manufacturers don't do it. Instead, they obfuscate stuff on purpose (see the DM-SMR example above) and make broad and vague claims which appeal to the widest possible audience. Our psychology is such that most people won't admit they made a bad purchasing decision and bother with a refund, so even when they eventually find out they were screwed by deceptive marketing, they keep the product and try to find some good use for it -- that's how you get to the "every product is good for someone" myth.
 
But it's not. Why would you even say that. I was making a joke. Are you really not intelligent enough to see that was sarcasm?
The below can be interpreted as serious, a joke, or otherwise, regardless of intelligence. Who knows, maybe my meager 115 IQ is not enough, with English as my first language, to gauge the nuances of the 8-word one-liner with no context as to your intent.
Why does Intel hate the climate so much?
 

danny009

No one cares, Intel. Fix the shady issues with Intel Management Service, stop increasing prices without notice outside of the US, AND release those Bra Lake CPUs already.

No one - not even my 5th-born child - cares about your anti-consumer business with AI.
 

bit_user

Snark aside, how am I supposed to know that my "Socks with plastic (TM)" aren't shedding micro / nano particles which then end up in my blood vessel plaque lining?
Yuck.

We should stop buying that junk until they stop selling it and go back to cotton.
Truth be told, I would miss plastic clothes in the winter. I hope there's some kind of safe bio-polymer alternative that we could use instead. I'd hate to go back to wearing all that wool, plus I'm sure there aren't enough sheep to clothe everyone living in countries where it does still sometimes get cold.
 
Yuck.


Truth be told, I would miss plastic clothes in the winter. I hope there's some kind of safe bio-polymer alternative that we could use instead. I'd hate to go back to wearing all that wool, plus I'm sure there aren't enough sheep to clothe everyone living in countries where it does still sometimes get cold.
Don't forget about the PFAS that bioaccumulate from all of the non-stick/hydrophobic clothing.
 

bit_user

It's quite obvious that in the context of a CPU, efficiency refers to performance per watt. I'm pretty sure everyone (well, except you, I guess) understands that and is talking about the same thing.
A few points about this:
  • It's workload-dependent.
  • There are other aspects of the measurement methodology that also matter. Like what OS, configuration, compiler, mitigations enabled/disabled?, hardware performance tuning?, number of different hardware samples, ambient temperature & humidity, amount of cooling, etc.
  • Do you measure perf/W or Joules? If the former, how do you filter out the noise & variations?
  • Do you measure package power or system power? With software reporting or via hardware?

I'm sure we could come up with more, but that should be enough to show that two people can measure performance of the same hardware setup (i.e. CPU, motherboard, cooler, etc.) and arrive at very different answers.
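For what it's worth, here is the perf/W-vs-Joules distinction from the list above in sketch form (all numbers invented): perf/W is a rate built on averaged, and therefore noisy, power samples, while Joules is the total energy the run consumed, so the two summarize the same run differently.

score = 30000                                     # hypothetical benchmark score
power_samples = [210, 195, 205, 188, 202, 199]    # package power in W, sampled once per second

avg_watts = sum(power_samples) / len(power_samples)
joules = sum(power_samples) * 1.0                 # energy ~= sum of 1-second samples

print(f"avg power: {avg_watts:.1f} W")
print(f"perf/W:    {score / avg_watts:.0f} pts/W")
print(f"energy:    {joules:.0f} J for the run")
# Same run, two summaries: the perf/W figure inherits whatever noise survives the
# averaging, while the Joules figure depends on run length as well as power.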

I haven't had the pleasure of testing a 14900KS,
So, can you cite even one instance where someone has posted quantitative evidence supporting your assertion?
 

CmdrShepard

Prominent
Dec 18, 2023
371
273
560
I'd hate to go back to wearing all that wool, plus I'm sure there aren't enough sheep to clothe everyone living in countries where it does still sometimes get cold.
Don't worry, all those "AI" datacenters running inference and training 24/7/365 will make sure temperatures never go below zero again. /s
It's workload-dependent.
Yes, but we can take several typical workloads -- ST code, ST code + SIMD, MT code, MT code + SIMD, integer & floating point variations, random memory access (latency), memory bandwidth, in cache & out of cache workloads.

Those metrics are enough to judge whether a CPU performs well enough for your tasks -- you don't really need to test every possible game and application to know that.
Like what OS, configuration, compiler, mitigations enabled/disabled?, hardware performance tuning?, number of different hardware samples, ambient temperature & humidity, amount of cooling, etc.
The impact of all that would add up to 10% at most, and if the same testing methodology is used for every CPU, they would all have the same handicaps or advantages.
Do you measure perf/W or Joules? If the former, how do you filter out the noise & variations?
Perf/W, by testing, say, 10 retail samples from different shops and then averaging them.
Do you measure package power or system power?
I am sure that if a mainboard manufacturer wanted to, they could produce nearly identical Intel and AMD mainboards which differ only in the socket and a bit of trace layout around it. You could then report system power with some common components.
With software reporting or via hardware?
Ideally both, at least until we can prove that software reporting is as good as hardware.
Real-world usage is not removing power limits and hitting the Blender button.
Hey, don't kink-shame!
 

TheHerald

So, can you cite even one instance where someone has posted quantitative evidence supporting your assertion?
That the 14900KS is more efficient than, e.g., the 14900K? It's self-evident that, on average, it should hit the same clocks at lower voltage. That's the whole point of binning.