Those processor prices can't be real. So they will now sell the flagship at a lower cost than previous-gen Ryzen chips?
Probably, if they were to add anything, it would be more AI/NPU, not iGPU, and they aren't doing that initially either. Since both have better add-in options, a good iGPU really only matters to a small subset of buyers in that market. 2 CUs for the desktop UI makes sense, especially if the add-in card is doing GPGPU/AI workloads.
Beyond that, the majority of people buying in that segment would likely prefer to save a buck or two, or have higher stable clocks, if they had their choice... Edit: and also not impact memory resources.
Little secret: there is no difference between an iGPU and an "NPU", processing-wise.
There ... is... no ... difference...... which is why it makes more sense to add the NPU than the iGPU to a space-limited die (even before considering the difference in heat). It makes even more sense when it would be the only one in the market, vs the myriad of options for integrated graphics on the desktop that can do a few TOPS and still rely on dGPUs for graphics or AI.
When will you people wake up? They literally say this every year. Literally. Every. Single. Year.
Low end GPUs will just get faster.
There ... is... no ... difference...
This reminds me of the old GeForce vs Quadro debate.
Some recently spotted listings on Best Buy hint that the upcoming Ryzen AI 300 laptops are reportedly going to launch on the 15th of July, 2024.
There is a difference, but obviously you don't know enough about the architecture to understand it, even when it's spelled out for you, showing which areas aren't needed in an NPU and the physical difference in die size that results. If you can't understand that, then showing you the layout changes in XDNA2 vs the previous generation is pointless.
The silicon is then optimized by adding more cache or expanding the memory bus to truly ridiculous levels, with equally ridiculous prices.
Tap into exceptional performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU
Now you're getting insulting.
...AGAIN, while the base vector processors may be similar, an NPU doesn't need a bunch of things the iGPU requires, like much of the RBEs. So if it's a dedicated chunk just for AI, then you do away with the parts you don't need.
It's not, but to you it might seem so. 🙄 It's the same silicon...
You realize that the "silicon" you refer to is broken up into a bunch of different highly specialized portions, right? An ALU is not the same thing as CU yet they are on "the same silicon..." @KnightShadey is telling you that an NPU is a highly specialized portion of silicon that excludes any part of the architecture usually afforded to a GPU to get more processing power for AI workloads...
You must have little to no understanding of how much silicon is dedicated to specific types of tasks in any given chunk of silicon. If the same number of mm^2 were dedicated to an NPU as to a GPU, the NPU would be multiple times faster than the GPU at a given AI workload, because an NPU sloughs off the fat of everything it doesn't need for that specific type of workload. You are correct that a GPU can easily do AI workloads, but so-called "NPUs" do them either faster or more energy-efficiently for the amount of silicon dedicated to them. If you want an NPU to do anything other than an AI task, it either can't, or a GPU is better at it. Your mind is so poisoned by specific marketing that it is leaking into your understanding of the underlying technologies.

The "NPU" is just a new name for GPU but with a 10x price-tag increase. We have several in our datacenters that we purchased for VDI desktop acceleration, prior to the renaming schemes. It was a really funny sales call we had the other month when they discussed how fortunate we were for buying those "AI accelerators". AI training algorithms are just regular vector instructions, mostly 16-bit mixed in with some 32-bit depending on the model. It's all CUDA-style vector programs that get loaded into the GPU and crunch through massive quantities of data. We are used to thinking of that data in relation to flashy graphics on a screen, but it could just as easily be Ethereum hashing or data-relational hashing. It's the exact same processing profile.
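To make the "it's just vector math" point concrete: the core operation of a neural-network layer is nothing graphics-specific, just a matrix multiply on low-precision inputs with higher-precision accumulation. A minimal sketch in NumPy (the array names and shapes here are made up for illustration, not from any real model):

```python
import numpy as np

# Toy "neural" workload: fp16 inputs, fp32 accumulation -- the same
# mixed-precision pattern GPU tensor cores and NPUs are built around.
rng = np.random.default_rng(0)
activations = rng.standard_normal((4, 8)).astype(np.float16)  # 16-bit inputs
weights = rng.standard_normal((8, 3)).astype(np.float16)      # 16-bit weights

# Upcast and accumulate in 32-bit, as the hardware typically does internally.
out = activations.astype(np.float32) @ weights.astype(np.float32)

print(out.shape, out.dtype)  # (4, 3) float32
```

Nothing in that sequence cares whether the operands came from an image, a hash table, or a language model; the argument in the thread is about which block of silicon runs it most efficiently per mm^2, not about what the math is.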
Datacenter GPUs already have much larger amounts of memory and wider memory buses.
But it's ok, I'm positive the crystal infused blockchain neural plasma processing warp cores will do everything you think they will.
Now, who's being insulting? 🤨
Think of an NPU like an F1 car for AI processing versus a GPU which would be like a BMW M8 competition.
@palladin9479 is a highly knowledgeable person; however, he seems to often get caught up in the labels of things and then puts whatever that is in the "I don't care" section. I can acknowledge that AI marketing is a plague on the IT and consumer-electronics sector while also caring to look at least deep enough to understand some of the nuts and bolts of the underlying technologies. The marketing buzzwords "AI," "TOPS," and "NPU" are certainly being taken hostage by corporate marketing, but that should not detract from the underlying technology they preface. Some people see some of these terms in a title and immediately just shut down, and I don't even blame them at this point.

Thanks for trying; he seems incapable of getting past the surface view of these and confuses function for form.
I was going to use a similar analogy: NPU = race car, GPU = SUV, and CPU = huge 8x8 RV (like the Mercedes).
While they all may be powered by a fundamentally similar V8, the NPU race car is purpose-built to go fast under specific restricted conditions, the GPU SUV has more flexibility and can do more tasks under a wider variety of conditions at good speeds, and the CPU RV has just about everything, including the kitchen sink, to do even more tasks under more varied conditions, but much slower.
And the H100 and MI300 are jet-powered semis at the fair... both seen here doing music mashups (like "99 Problems" by the Beach Boys) for There I Ruined It https://www.youtube.com/c/thereiruinedit ... 🫠
I understand that, but as this back & forth was initiated by his response to someone else's (my) thoughts on the subject, in a thread with AI in the title (even as just a model #), it seems he shouldn't be triggered by the discussion of that very topic to the point of ignoring the crux of the discussion. 🤔
Except they have. What low-end GPUs...
They haven't really been making them lately...