News Lisa Su announces AMD is on the path to a 100x power efficiency improvement by 2027 — CEO outlines AMD’s advances during keynote at imec’s ITF Worl...

People, what is up with the AMD hate?
I honestly don't understand the hate. And it's everywhere.
Worst offenders? Linux users.

I have never seen a bigger group of hypocrites.

They demand that everything be FOSS-compliant and that source code be available.

AMD does just that and they hate them, but the greedy ones do the opposite, pushing closed binary blobs, and they can't get enough praise and sales.
AMD GPUs are just fine and dandy. I have a couple, as well as a couple of Nvidias, including the 4090 and 3090. In non-RT games it's really neck and neck; in fact, with last-gen cards AMD usually tops it.
And at a lower price, but as stated above, it's this irrational hate for AMD that won't allow them to see that.
But once you add the Nvidia-developed RT, then yes, AMD loses out. Now, I have to be honest and say I really struggle to see where RT actually makes much of a difference in games; if I really look hard, yes, I can tell, but when playing I don't have time, tbh.
Preach!

I will say that a couple, and I mean literally a couple, of games look great with RT. 99% of the rest don't show anything, besides a mirror or a puddle where you can pause long enough to admire it, that justifies the insane performance hit.

But again, anything that can be used to hate on AMD and white-knight for a $2T (bubbly, at the moment) company.
 
This feels like a "what have you done for me lately" problem. Cool, AMD, that 100x thing sounds super awesome on paper, but nobody in the know really believes it. We know you will use the exact same amount of power in total, and that it is just a type of greenwashing... also, everyone will likely see a large increase in efficiency in this arena, as it is the big push. It's always a same-power, more-production pipeline.

This just feels a little desperately timed, or something. AMD needs to actually compete with Nvidia head to head at some point, instead of always being off by a bit with some caveat. I would love a real Nvidia competitor.
This makes it very clear that you have no idea what underlies AMD's chiplet architecture & Infinity Fabric.

It's even explained here when the compactness of the 12 (?) chiplets and memory is discussed: one huge module vs. a decentralized & inefficient network of linked-up resources.

It's widely known that AMD has a big lead in this. She is simply stating the obvious fact that this can no longer be regarded as a trifle by customers.

Concurrently, Nvidia's lauded software moat is getting shallower in clients' eyes.
 
Sounds like when Intel said ray tracing would make games look like real life because it's so much more realistic. I was bummed because Crysis only had regular lighting. Turns out RT is only one of many things that make games look good.

That's why these hype claims are pretty much best ignored, because predicting the future never works out. Remember Intel actually believing we would have 8+ GHz CPUs?

RT certainly looks a lot better than none; it's very noticeable in wet conditions and on smooth reflective surfaces, much less noticeable in open natural environments. Compared to path tracing on PC, Cyberpunk on PS5 looks like PS3 graphics, imo.
 
Remember NFTs? How useless and valueless they were, but still, for a moment, everyone was talking about them and spending money on them?

Now take that and apply it to ray tracing. It doesn't matter at all whether RT is viable or whether it makes a difference in image quality; it's the new tech and people want it. So stop saying "RT is not important," because the lack of good RT is a big part of the low sales of Radeon cards.

Their CPUs are amazing, helped by the fact that their competitor stopped being competitive (but consumer perception still makes a lot of people buy Intel "because it's better, right?").

AMD has done a lot of great things, but they still have a lot of catching up to do. I'm confident they can reach the 100x target, and that they will keep being competitive, but I really hope they can work out the other stuff that always puts them in second place.
 
Remember NFTs? How useless and valueless they were, but still, for a moment, everyone was talking about them and spending money on them?

Now take that and apply it to ray tracing. It doesn't matter at all whether RT is viable or whether it makes a difference in image quality; it's the new tech and people want it. So stop saying "RT is not important," because the lack of good RT is a big part of the low sales of Radeon cards.

Exactly.

If you recall, 3dfx was using similar arguments about 32-bit color: the loss in performance would be too great to justify the "minimal" improvement in image quality. So, 32-bit rendering "didn't matter."

Until it did matter, and by then it was too little, too late.
 
100x efficiency! This sounds great! That means I will be able to have a 2 W 5700 XT and a 2 W 5950X in a handheld with great battery life in 2027! And a 0.3 W 4700U with vertically stacked RAM in a wristwatch with a USB4 port! Sign me up!

This is what

"Lisa Su says AMD is on track to a 100x power efficiency improvement by 2027"​

sounds like to me. And it is what she expressed.

That is simply misleading, and it is not hateful to defend objective reality.

And taking credit for TSMC's GAAFET? And the mention of the Am286? AMD was still making reverse-engineered clones of Intel chips for 11 more years after that: https://www.tomshardware.com/picturestory/713-amd-cpu-history.html

AMD is doing well in certain efficiency scenarios. There is no need to falsely embellish.
 
AMD is making their strategy pretty clear: they are going after the datacenter and hosted AI/GPU market, where power efficiency is becoming increasingly important. Margins in that space are also much higher than on consumer hardware. The issue they face is that the market is still driven by CUDA, at least until it fully embraces some of the newer advances in PyTorch and such that minimize CUDA-specific optimizations.
 
This feels like a "what have you done for me lately" problem. Cool, AMD, that 100x thing sounds super awesome on paper, but nobody in the know really believes it. We know you will use the exact same amount of power in total, and that it is just a type of greenwashing... also, everyone will likely see a large increase in efficiency in this arena, as it is the big push. It's always a same-power, more-production pipeline.
It seems like most people in this thread don't understand that:
  1. Efficiency is the ratio of work divided by the amount of energy needed to perform it. You can increase efficiency by increasing the numerator (i.e. the amount of work accomplished), decreasing the denominator (i.e. the amount of energy), or some combination.
  2. She's talking primarily about datacenter, here. If you just look at the slides, that becomes quite clear.

In a seemingly weird twist on #1, we've recently seen a lot of examples that simultaneously increase efficiency and power. Speaking of AMD, their Zen 4-based Genoa EPYC is more efficient than Zen 3-based Milan, while also using more power. According to Phoronix' launch day testing, perf/W of the respective flagship models improved 38%, from the EPYC 7763's 0.74 points/W to the EPYC 9654's 1.02 points/W, even while average power consumption increased 31.8%.

[Charts from the Phoronix review linked below]


Source: https://www.phoronix.com/review/amd-epyc-9654-9554-benchmarks
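
If anyone wants to sanity-check how those two numbers fit together, here's a trivial Python sketch. The points/W figures and the 31.8% power delta are straight from the review above; the implied raw-performance number at the end is my own back-of-envelope arithmetic, not something Phoronix reported:

```python
# Quick sanity check of the Phoronix numbers quoted above.
milan_perf_per_watt = 0.74   # EPYC 7763 (Zen 3), points per Watt
genoa_perf_per_watt = 1.02   # EPYC 9654 (Zen 4), points per Watt

efficiency_gain = genoa_perf_per_watt / milan_perf_per_watt - 1
print(f"perf/W improvement: {efficiency_gain:.0%}")              # ~38%

# perf = (perf/W) * W, so if average power also rose ~31.8%,
# raw performance must have risen by even more:
power_increase = 0.318
perf_increase = (1 + efficiency_gain) * (1 + power_increase) - 1
print(f"implied raw performance increase: {perf_increase:.0%}")  # ~82%
```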


100x efficiency! This sounds great! That means I will be able to have a 2 W 5700 XT and a 2 W 5950X in a handheld with great battery life in 2027! And a 0.3 W 4700U with vertically stacked RAM in a wristwatch with a USB4 port! Sign me up!
Simply looking at their trajectory, so far, makes it pretty clear they're not targeting lower-power products. So, we can indeed anticipate that their future products will use roughly similar amounts of power. Instead, they must be focusing on increasing performance within that existing envelope, rather than simply throwing more Watts at it (dishonorable mention: Intel and their upcoming 1.5 kW datacenter GPU).

However, if you look at how much compute performance Apple managed to pack into their latest iPad Pro, you're not too far off with your 2W example. Being M4-based, it indeed has stacked RAM.
 
It seems like most people in this thread don't understand that:
  1. Efficiency is the ratio of work divided by the amount of energy needed to perform it. You can increase efficiency by increasing the numerator (i.e. the amount of work accomplished), decreasing the denominator (i.e. the amount of energy), or some combination.
  2. She's talking primarily about datacenter, here. If you just look at the slides, that becomes quite clear.


Simply looking at their trajectory, so far, makes it pretty clear they're not targeting lower-power products. So, we can indeed anticipate that their future products will use roughly similar amounts of power. Instead, they must be focusing on increasing performance within that existing envelope, rather than simply throwing more Watts at it (dishonorable mention: Intel and their upcoming 1.5 kW datacenter GPU).

However, if you look at how much compute performance Apple managed to pack into their latest iPad Pro, you're not too far off with your 2W example. Being M4-based, it indeed has stacked RAM.
Oh, I get it, and I am not saying it won't happen. I am saying this is normal, not exceptional, and that everyone is doing it. The timing of the presentation and the lack of anything actually exciting is more what I was knocking here. Those 100x numbers are corporate greenwashing to me. They are just going to load more onto the machines, so total power savings are nothing, and they aren't saying they are... but that is the real point of this presentation.

That, and to act like the competition isn't doing it as well.
 
This makes it very clear that you have no idea what underlies AMD's chiplet architecture & Infinity Fabric.

It's even explained here when the compactness of the 12 (?) chiplets and memory is discussed: one huge module vs. a decentralized & inefficient network of linked-up resources.

It's widely known that AMD has a big lead in this. She is simply stating the obvious fact that this can no longer be regarded as a trifle by customers.

Concurrently, Nvidia's lauded software moat is getting shallower in clients' eyes.
Right, which is why AMD is outselling Nvidia in... what market?

Sorry, but it doesn't appear that the rest of the AI market or the GPU market agrees with you about any of this. I am not saying AMD's tech is complete crap, but they always seem to fall a bit short and Nvidia runs over them. If Intel ever manages to get its ship in order again, AMD is gonna have a real problem, because they very rarely come out on top against Nvidia and have been surviving on Intel's failures more than on their own successes for quite a while.
 
I am saying this is normal, not exceptional, and that everyone is doing it.
Yes, but AMD is throwing down another stake (or, I guess reaffirming an earlier goal) and sharing a rough outline of their plans for achieving it. While you could get a "free ride" on efficiency, merely by enjoying the benefits provided by new process nodes and technologies like HBM or on-package LPDDR5X, AMD is stating that they've made efficiency a priority, quite possibly over and above performance!

If you read the article, you'll understand why. I think I heard a projection, made years ago, that if datacenter energy use continued to grow on its (then) current trajectory, the oceans would boil by like 2100. I have no idea what that was based on, or whether I'm even remembering it correctly, but the slides make it clear that power is going to be a limiting factor in datacenter performance before long. So the focus needs to be on improving perf/W a lot more than raw performance.
 
It seems like most people in this thread don't understand that:
  1. Efficiency is the ratio of work divided by the amount of energy needed to perform it. You can increase efficiency by increasing the numerator (i.e. the amount of work accomplished), decreasing the denominator (i.e. the amount of energy), or some combination.
  2. She's talking primarily about datacenter, here. If you just look at the slides, that becomes quite clear.

In a seemingly weird twist on #1, we've recently seen a lot of examples that simultaneously increase efficiency and power. Speaking of AMD, their Zen 4-based Genoa EPYC is more efficient than Zen 3-based Milan, while also using more power. According to Phoronix' launch day testing, perf/W of the respective flagship models improved 38%, from the EPYC 7763's 0.74 points/W to the EPYC 9654's 1.02 points/W, even while average power consumption increased 31.8%.

If compared apples to apples, Lisa's claim is that the next-gen EPYC based on Zen 5 will be 30x as efficient as EPYC based on Zen 2 (from 2020), and the generation after that will be over 3x as efficient again. Which is absurd. That's apples to apples, though, not some Nvidia-style, heavily qualified "Pascal is 10x faster than Maxwell," or "20 PFLOPS of FP4 is 30x faster than 4 PFLOPS of FP8."
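
Spelling out the compounding, just so we're looking at the same numbers (this is my own arithmetic, assuming the 2020 Zen 2 EPYC is the baseline for both the old 30x-by-2025 goal and the new 100x-by-2027 target):

```python
# Back-of-envelope compounding of the stated goals (my arithmetic, not AMD's slides).
baseline_2020 = 1.0                 # Zen 2 EPYC, 2020, normalized efficiency
goal_2025 = 30 * baseline_2020      # the earlier "30x by 2025" goal
target_2027 = 100 * baseline_2020   # the new "100x by 2027" target

extra = target_2027 / goal_2025
print(f"Gain still needed on top of 30x: {extra:.2f}x")   # ~3.33x in roughly two years
```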

I just don't want a claim about some specific scenario to be given, or taken, as true across scenarios in general, whatever the company that makes it. That being said, there probably is some task that will be able to be done 100x as efficiently, in the server space, by some new combination of AMD-designed hardware and more efficient methods.

But that should be a specific claim about efficiency, not a blanket one that needs some educated (and biased) contextual interpretation for it not to be a lie. (I added "biased" because there will likely be unspecified variables that will need to be chosen in such a way as to make the statement true, where they could validly also be chosen to make the statement false.)

Also, the task is not static. The task is likely to be "do as much of this AI stuff as possible, as fast as possible, regardless of the power consumption." So no real power savings will be gained, much less the specified servers consuming 1/100th the power for the same task of "do as much AI as possible right now." The actual task will wind up consuming more power, because its compute needs will have greatly increased and because hardware depreciation is a far larger cost than power consumption.

Simply looking at their trajectory, so far, makes it pretty clear they're not targeting lower-power products. So, we can indeed anticipate that their future products will use roughly similar amounts of power. Instead, they must be focusing on increasing performance within that existing envelope, rather than simply throwing more Watts at it (dishonorable mention: Intel and their upcoming 1.5 kW datacenter GPU).

However, if you look at how much compute performance Apple managed to pack into their latest iPad Pro, you're not too far off with your 2W example. Being M4-based, it indeed has stacked RAM.
I don't think the latest iPad Pro can hold up to the Steam Deck handheld in the top 100 most played games on Steam, much less to a CPU/GPU combo close to 10x as fast. I also think it would use considerably more than 4 W during gaming. I also don't see a 0.3 W wristwatch-based system becoming a viable desktop alternative anytime soon, where you could put your watch in the PC dock/charger at work and use it for your job. Not even with an Apple Watch.

I was just pointing out how silly the 100x efficiency claim is if you look at it as an apples-to-apples comparison, i.e., the things normal people do on a PC taking 1/100th the power.
 
If compared apples to apples, Lisa's claim is that the next-gen EPYC based on Zen 5 will be 30x as efficient as EPYC based on Zen 2 (from 2020).
Well, if you look at how she defines it, she's including things like VNNI. If you compare an EPYC with 3D V-cache running VNNI instructions to do inferencing, then I actually think it's plausible. Consider that we're talking about 2x the cores running between 1.5x and 2x as fast (here I'm counting the product of VFP IPC and clock speed increases), with 2-4x as wide vector floating point and about 4x the data density... it's not out of reach.
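
Multiplying those factors out (just a back-of-envelope sketch; the ranges are the ones I listed above, and the product is only a plausibility check, not a measurement):

```python
# Compounding the rough factors listed above; the endpoints are just the
# pessimistic and optimistic combinations, not measured results.
cores        = 2.0          # ~2x the core count
speed_range  = (1.5, 2.0)   # product of VFP IPC and clock gains
vector_range = (2.0, 4.0)   # 2-4x wider vector floating point
data_density = 4.0          # ~4x denser data

low  = cores * speed_range[0] * vector_range[0] * data_density   # 24x
high = cores * speed_range[1] * vector_range[1] * data_density   # 64x
print(f"compounded gain: {low:.0f}x to {high:.0f}x")
```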

Now, I happen to think it's a little silly to weight these metrics too strongly in favor of CPU inferencing, but there was also a lot of emphasis on MI300X in the slides.

Also, the task is not static. The task is likely to be "do as much of this AI stuff as possible, as fast as possible, regardless of the power consumption." So no real power savings will be gained, much less the specified servers consuming 1/100th the power.
I think we all have an intuitive sense that if you give people more compute power per $ or compute per W, they will simply use more. So, while she's probably right that power will soon become a limiting factor in the rate of computational improvements, building more efficient CPUs and GPUs is more about keeping a performance & TCO lead over the competition than about relative reductions in global energy usage.

I don't think the latest iPad Pro can hold up to the Steam Deck handheld in the top 100 most played games on Steam,
I expect it does, but this is too far outside my realm of expertise.

I also think it would use considerably more than 4 W during gaming.
Just how much heat do you think that ultra-thin tablet can dissipate? I suggest you try actually looking for some decent-quality reviews. You might be surprised.

I also don't see a 0.3 W wristwatch-based system becoming a viable desktop alternative anytime soon
Eh, is it really so far-fetched, though? Ampere Computing is claiming their next-gen CPUs will scale up to 256 cores in 350 W, made on TSMC 3 nm. I realize that's 4x as much power as you said, but these are server cores operating above their peak efficiency point. Scale them down to 0.3 W and you'd probably still have something faster than a Gracemont core.
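
For scale, the per-core division on that Ampere figure looks like this (a rough sketch; the trailing comment is hand-waving about the V/f curve, not data):

```python
# 256 cores in a 350 W package works out to well under 1.5 W per core, before
# you even back off voltage/clock toward the efficiency sweet spot.
package_watts = 350
core_count = 256
per_core_watts = package_watts / core_count
print(f"~{per_core_watts:.2f} W per core at rated package power")   # ~1.37 W

# Hand-wavy: dropping a core from ~1.4 W to 0.3 W should cost much less than a
# proportional amount of performance, since perf/W improves as V/f comes down.
```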

I was just pointing out how silly the 100x efficiency claim is if you look at it as an apples-to-apples comparison, i.e., the things normal people do on a PC taking 1/100th the power.
I wasn't trying to directly address your wristwatch claim, because you do seem to be missing the point that efficiency isn't only about power savings, nor are perf/W curves linear. However, I think it's interesting to consider how close we might actually be. Again, ultra low-power isn't my area of expertise, so I can't address the question with the kind of attention it deserves.
 
Zen 4 was touted as a huge efficiency increase over Zen 3. In reality, the 5950X is 50% (!!!!) more efficient than the 7950X. Overpromise, underdeliver.

[Chart: multi-threaded efficiency comparison]
 
Zen 4 was touted as a huge efficiency increase over Zen 3. In reality, the 5950X is 50% (!!!!) more efficient than the 7950X. Overpromise, underdeliver.
That's more like 30% (you have to compare stock to stock); just because the AMD fanbois are overinflating everything doesn't mean you have to do it too. 30% is still a large enough difference to make your point about AMD pushing power and overclocking Ryzen to the breaking point just to fall short of barely keeping up.
It seems like most people in this thread don't understand that:
  1. Efficiency is the ratio of work divided by the amount of energy needed to perform it. You can increase efficiency by increasing the numerator (i.e. the amount of work accomplished), decreasing the denominator (i.e. the amount of energy), or some combination.
...

In a seemingly weird twist on #1, we've recently seen a lot of examples that simultaneously increase efficiency and power. Speaking of AMD, their Zen 4-based Genoa EPYC is more efficient than Zen 3-based Milan, while also using more power.

...

... rather than simply throwing more Watts at it (dishonorable mention: Intel and their upcoming 1.5 kW datacenter GPU).

So you are calling people out for not dividing performance by power, but in the same breath you don't divide performance by power for Intel; Intel using 1.5 kW = evil... because science, I guess.
Doesn't matter what performance they will get; higher power = more bad, because that's what you said in point 1, right?!
From the same article you linked:
"Intel itself once said that it would offer five times higher performance per watt and five times higher memory capacity and bandwidth compared to its 2022 products while also offering a 'simplified' programming model. "
That would be a 5x efficiency increase... which it won't be, but it will be a lot more efficient than what 1.5 kW can give you now.

And then you get upset when I point out that there is something wrong with your memory; you don't remember the beginning of your post by the time you end it...
 
Zen 4 was touted as a huge efficiency increase over Zen 3. In reality, the 5950X is 50% (!!!!) more efficient than the 7950X. Overpromise, underdeliver.

[Chart: multi-threaded efficiency comparison]
Again, that's just because the 7950X pushes the cores outside of their efficiency window more than the 5950X did.

Above, I cited data showing the flagship Zen 4 EPYC (Genoa) is 38% more efficient than the Zen 3 flagship (Milan).
 
That's more like 30% (you have to compare stock to stock); just because the AMD fanbois are overinflating everything doesn't mean you have to do it too. 30% is still a large enough difference to make your point about AMD pushing power and overclocking Ryzen to the breaking point just to fall short of barely keeping up.
I am comparing "stock" to "stock," since both setups have XMP on: 3600 RAM for the 5950X and 6000 for the 7950X.
 
So you are calling people out for not dividing performance by power, but in the same breath you don't divide performance by power for Intel; Intel using 1.5 kW = evil... because science, I guess.
While it's not inconceivable that Falcon Shores might have so much die area that 1.5 kW is still in its efficiency window, it seems extremely unlikely. I took a small, but I think reasonable and defensible, leap there.

And then you get upset when I point out that there is something wrong with your memory; you don't remember the beginning of your post by the time you end it...
If you think you see an inconsistency in something I say, it's fair to point out. However, an apparent inconsistency does not give you license to start in with ad hominem attacks. It could be (as above) that there's no real inconsistency, but simply an unstated assumption. However, you should ask and not presume to know.
 
Again, that's just because the 7950X pushes the cores outside of their efficiency window more than the 5950X did.

Above, I cited data showing the flagship Zen 4 EPYC (Genoa) is 38% more efficient than the Zen 3 flagship (Milan).
Oh, are you saying that the 7950X isn't an inherently inefficient CPU, but just has a way higher power limit than the 5950X?
 
can't provide the necessary insight to support or refute the claim.
Obviously. And it was done on purpose. I just find it very interesting that whenever someone does that with AMD they get called out, but nobody gets called out when doing that with Intel. This is something that psychology has to study; it's really not normal, my man.
 
I took the 100x as Su taking a shot at Gelsinger. Remember Gelsinger's promise of 1000x by 2025? I took this as Su saying, "yeah, we can throw big numbers around too, but we actually hit ours."
 