News AMD to Make Hybrid CPUs, Using AI for Chip Design: CTO Papermaster at ITF World

When it comes to the latest XDNA AI Engine (not specifically generative AI), the Ryzen AI co-processor might boost AI capabilities even on Ryzen consumer CPUs. But one major hurdle/red flag is the cost and overall value proposition of the chip, and using AI in the consumer space makes little sense right now.

The major barrier to implementing Ryzen AI is cost: there has to be a genuinely good reason to put Ryzen AI in budget chips and even desktop SKUs.

Adding Ryzen AI to a high-core-count Threadripper chip might be a good idea, but even then, while it could be useful for training purposes, software may not necessarily use it.

I think the main driving force that can overcome the value and cost barrier will be the "software". As software evolves to make better use of AI and add more value, there will be a genuinely good reason to have dedicated AI hardware blocks on your chips. Otherwise, not.
 
Great post as usual, metal, and I like your analysis and projection of where this is going.
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
AI has now been in a 10 year long hype cycle, and it's still dumb as a rock.

(Attached image: Screenshot-2023-05-16-at-10-25-45-AM.jpg)
 
  • Like
Reactions: sitehostplus

ezst036

Honorable
Oct 5, 2018
I'm glad AMD has officially decided to embrace big.LITTLE for future processors. It was kind of inevitable, and with big.LITTLE now mainstream across all consumer CPU producers, this will further delay a post-x86 future.

I'm not entirely put off by it. I'd much rather move directly from x86 --> RISC-V than have to move to ARM and then move again to RISC-V at some point later. The RISC-V platform just isn't ready yet, and it doesn't seem to be accelerating in our favor.
 

RichardtST

Notable
May 17, 2022
I'm *NOT* paying for useless little slow cores. Forget it. I do not want them. I do not want to be charged for them. They are completely worthless in every last one of my applications both at home and at the office. Give me my 16 fast cores and ah heck-off with the rest.
 

JamesJones44

Reputable
Jan 22, 2021
I'm glad AMD has officially decided to embrace big.LITTLE for future processors. It was kind of inevitable, and with big.LITTLE now mainstream across all consumer CPU producers, this will further delay a post-x86 future.

I'm not entirely put off by it. I'd much rather move directly from x86 --> RISC-V than have to move to ARM and then move again to RISC-V at some point later. The RISC-V platform just isn't ready yet, and it doesn't seem to be accelerating in our favor.
ARM to RISC-V is actually a smaller jump, as they are both RISC-based and share some basic instructions in their ISAs. CISC to RISC is a much bigger jump technically.
 
I'm *NOT* paying for useless little slow cores. Forget it. I do not want them. I do not want to be charged for them. They are completely worthless in every last one of my applications both at home and at the office. Give me my 16 fast cores and ah heck-off with the rest.
If we went by TechPowerUp's assessment of the i9-12900K:
  • P-cores alone provided 152% of the E-core-only baseline
  • P+E cores together provided 222% of the E-core-only baseline
4 E-cores take up roughly the same die space as a P-core, so even if you got rid of the E-cores and put P-cores in their place (bringing you up to 10 P-cores), some basic math says you'd only get about 190% of the E-core-only baseline. And I believe 4 E-cores also draw about the same power as a P-core.

So if you'd rather have less performance overall, be my guest.
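As a sanity check, the arithmetic in that post can be spelled out. This is a toy calculation that assumes the quoted TechPowerUp figures and perfect scaling when adding P-cores, both of which are simplifications:

```python
# Toy reproduction of the estimate above. The 152/222 figures are the
# quoted TechPowerUp multi-threaded results for the i9-12900K, with the
# 8-E-core-only run as the 100% baseline.
e_only = 100      # 8 E-cores alone (baseline)
p_only = 152      # 8 P-cores alone
p_plus_e = 222    # 8 P-cores + 8 E-cores together

# 4 E-cores occupy roughly one P-core's die area, so trading the
# 8 E-cores for P-cores yields a hypothetical 10 P-core chip.
extra_p = 8 // 4
ten_p_estimate = p_only * (8 + extra_p) / 8
print(ten_p_estimate)  # 190.0 -- well short of the hybrid's 222
```

Real scaling would be sub-linear (shared cache, memory bandwidth, power limits), so 190% is if anything optimistic for the all-P-core layout.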

ARM to RISC-V is actually a smaller jump, as they are both RISC-based and share some basic instructions in their ISAs. CISC to RISC is a much bigger jump technically.
Unless the application uses a bunch of micro-optimizations, the jump is as simple as switching compiler flags.

The only people who really have to worry about this stuff are system software developers. Application developers shouldn't have to worry about it.
 

Sam Bi

Commendable
Mar 7, 2021
Actually, if done properly, the AI core would benefit not so much the applications directly, but rather the entire processor, by letting it learn each application's behavior and thereby enabling better branch prediction, better overall power utilization, and better overall security.
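The branch-prediction angle is concrete enough to sketch: AMD has publicly described Zen's branch predictor as perceptron-based, the simplest form of on-die learned prediction. Below is a minimal software model of such a predictor; the history length, threshold, and the toy branch pattern are illustrative choices, not anything from AMD's actual design:

```python
# Minimal sketch of a perceptron branch predictor: weights are trained
# online against recent branch outcomes, and the sign of the dot
# product gives the taken/not-taken prediction.

HISTORY = 8                   # global history length (illustrative)
THRESHOLD = 2 * HISTORY + 1   # common training-threshold heuristic

class PerceptronPredictor:
    def __init__(self):
        self.weights = [0] * (HISTORY + 1)  # weights[0] is the bias
        self.history = [1] * HISTORY        # +1 = taken, -1 = not taken

    def predict(self):
        # Dot product of weights with recent outcomes; >= 0 means "taken".
        return self.weights[0] + sum(
            w * h for w, h in zip(self.weights[1:], self.history))

    def update(self, taken):
        y = self.predict()
        outcome = 1 if taken else -1
        # Train on a misprediction or a low-confidence output.
        if (y >= 0) != taken or abs(y) <= THRESHOLD:
            self.weights[0] += outcome
            for i in range(HISTORY):
                self.weights[i + 1] += outcome * self.history[i]
        self.history = [outcome] + self.history[:-1]

p = PerceptronPredictor()
pattern = [True] * 9 + [False]  # a branch taken 9 times out of 10
correct = 0
for _ in range(50):
    for taken in pattern:
        if (p.predict() >= 0) == taken:
            correct += 1
        p.update(taken)
print(correct / 500)  # fraction of correct predictions on this branch
```

The point of doing this in dedicated hardware rather than software is that the dot product and update must finish within a pipeline stage; an AI block doing the same kind of learned profiling for power management is the same idea at a coarser timescale.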
 
May 17, 2023
Is he really a CTO? Because he certainly talks like one.
That's it: different cores, different processors, different sets of blah blah blah...
This is what AMD and Intel are doing now, confusing consumers and cheating them out of their money (Zen 2 and 5000-series APUs, etc.). Only the companies win here.

Although I partially agree that processors will become more adaptable to applications.
You can develop in the direction of hardware accelerators (e.g., FPGAs, co-processors).
Don't put everything on one die, because some people will end up paying for things they never use, while for others it won't be enough.
1) For example, everyone uses the Internet. Why not accelerate it in hardware, especially since more and more applications use a browser engine? You could make an FPGA that loads the engine and handles HTTP(S) + TLS. If you want money so badly, you can make engine upgrades paid (these are programmable circuits, after all). This also covers page rendering.

2) A separate chip for video encoding/decoding (including the various new codecs). Updates could also be paid. This could be integrated as a separate chip on the GPU or on the CPU.

3) AI should definitely be a separate chip, and ideally a separate expansion card, PCIe x16 or M.2, depending on the requirements. Upgrades could be paid, or replacement discrete cards could be sold.

That's what comes to mind for now. And all of this would be more energy efficient than even a zillion small cores.

Processors should also be split into small-core (2 P-cores, 16 E-cores, etc.) and large-core (16 P-cores, 2-8 E-cores, etc.) lines.
Finally, bring 10GbE to the consumer segment. Twenty years ago many motherboards already had 1GbE; I don't need neutered 2.5GbE in 2023+.

That way there would be no need for marketing s*** that produces more s***.
Users would be satisfied, managers would keep earning money, and the market would be neatly segmented.

What has happened over the past 10 years to put so many fools in the management of large companies?
 

Johnpombrio

Distinguished
Nov 20, 2006
It has been decades since AMD, Intel, and the rest did manual mapping of their CPU layouts. They have a vast library of highly sophisticated tools to lay out, mask, and manufacture their current products, and I would have called those tools "AI-driven" for years now. AMD is just putting a buzzword on a continuation of those tools.
 
  • Like
Reactions: Amdlova and -Fran-
ARM to RISC-V is actually a smaller jump as they are both RISC based and share some basic instructions in their ISA. CISC to RISC is a much bigger jump technically.
This would be true if it were the early '90s. Over the last 20-30 years CISC has become more RISC-like and RISC has become more CISC-like. In the end, both are basically hybrids now, and more like each other than they were in the '90s.
 
  • Like
Reactions: MetalScythe

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
That seems more like a string error: looking for "e" rather than e inside of another string.
My God. If billion-dollar AI tools like Google's Bard cannot tell what the letter "e" means because they don't understand what quotation marks are, then AI is far dumber than I even imagined possible.
 

Eximo

Titan
Ambassador
Just a matter of training the AI model to look for that text pattern and the context in which it applies.

The quotes are really for the human anyway.

For all we know it could interpret "e" as the symbol for energy and respond "Yes, ketchup contains e" and be completely right in context but wrong in concept.
 
Also, pointing out an edge case where AI falters and then claiming it sucks is a strawman at best. Try asking it something that would take a human considerably longer to figure out and see what the AI comes up with.

I mean, can I call you a blithering idiot if you can't answer a simple question because your brain farted, even though you could, say, plan out several routes with alternate paths through New York City in your head?
 
  • Like
Reactions: MetalScythe
I'm *NOT* paying for useless little slow cores. Forget it. I do not want them. I do not want to be charged for them. They are completely worthless in every last one of my applications both at home and at the office. Give me my 16 fast cores and ah heck-off with the rest.
If you have heavily-multithreaded applications that can fully make use of a 16-core, 32-thread processor, then they should be able to spread their workload well across efficiency cores as well. Those kinds of workloads split their tasks up into lots of tiny chunks, so it doesn't really matter much whether the chunks are being processed by a faster or slower core. Even if a performance core could process twice as many chunks of data as an efficiency core in a given amount of time, if two efficiency cores could process the same amount of data while using less power and producing less heat, then they could be the better option, and if they require less silicon to achieve the same result, they could potentially provide that level of performance at a lower cost.

And if you don't often run highly parallel tasks like that, then you probably don't need 16 performance cores, and even 8 might be overkill. Most desktop applications and games are not well threaded, as many processing operations can't really be divided across multiple cores. You might have one or two threads pushing a core to its limits and holding back performance, while the majority of threads barely utilize the cores they are on. And that's exactly why asymmetric cores can make a fair amount of sense. If those less-demanding threads run on smaller, more efficient cores, that can free up resources and let the performance cores devote more of their throughput to the handful of demanding threads that actually limit the application. It would arguably be a bigger waste to pay for 16 big performance cores just to have a couple of them fully utilized while the rest are barely doing anything.
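The area/power trade-off in that argument can be made concrete with a toy model. All the per-core numbers below are round, illustrative assumptions (a P-core worth 2x an E-core in throughput, 4x in area), not measured figures for any real chip:

```python
# Toy model comparing a hybrid core layout to an all-P-core layout
# built from the same die area. All constants are assumptions.
P_CORE_THROUGHPUT = 2.0   # chunks/sec; assume a P-core is 2x an E-core
E_CORE_THROUGHPUT = 1.0
P_CORE_POWER = 4.0        # watts per core (assumed)
E_CORE_POWER = 1.5
P_CORE_AREA = 4.0         # assume 4 E-cores fit in one P-core's area

def config(p, e):
    """Aggregate throughput, power, and area for p P-cores + e E-cores."""
    return {
        "throughput": p * P_CORE_THROUGHPUT + e * E_CORE_THROUGHPUT,
        "power": p * P_CORE_POWER + e * E_CORE_POWER,
        "area": p * P_CORE_AREA + e * 1.0,
    }

hybrid = config(p=8, e=8)     # an 8P+8E layout
big_only = config(p=10, e=0)  # the same die area spent only on P-cores

print(hybrid)    # more aggregate throughput for the same area...
print(big_only)  # ...than the all-P-core alternative
```

Under these assumptions the hybrid layout wins on total throughput and on throughput per watt for the same silicon; the all-P-core chip only wins when the workload can't use more than a few threads, which is the other half of the argument above.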
 

Soaptrail

Distinguished
Jan 12, 2015
When it comes to the latest XDNA AI Engine (not specifically generative AI), the Ryzen AI co-processor might boost AI capabilities even on Ryzen consumer CPUs. But one major hurdle/red flag is the cost and overall value proposition of the chip, and using AI in the consumer space makes little sense right now.

The major barrier to implementing Ryzen AI is cost: there has to be a genuinely good reason to put Ryzen AI in budget chips and even desktop SKUs.

Adding Ryzen AI to a high-core-count Threadripper chip might be a good idea, but even then, while it could be useful for training purposes, software may not necessarily use it.

I think the main driving force that can overcome the value and cost barrier will be the "software". As software evolves to make better use of AI and add more value, there will be a genuinely good reason to have dedicated AI hardware blocks on your chips. Otherwise, not.
No home use? How else are you going to make disinformation photos and videos at home to influence elections without your own AI processor? /sarcasm
 

waltc3

Reputable
Aug 4, 2019
Papermaster is great with this kind of PR... really good stuff. But wake me when the products arrive. ;) That's where the rubber meets the road. AI is still hyped way too much; it's all garbage in, garbage out, as the case may be.