
News Intel Arc Xe2 Battlemage GPUs rumored to arrive next month — ahead of AMD RDNA 4 and Nvidia Blackwell

I wonder if this will be the last GPU we see from Intel for a while!? I figure it's just a matter of time before the GPU division, which can't be making money, is cut. The market is too hard to penetrate, with Nvidia's monopoly and even AMD struggling to gain market share. I just don't see how this works for Intel in any way.
They want some of the AI magic to make their share price go up.
 
For me, the question will be how much better these GPUs with Intel's Xe2-HPG architecture perform compared to the iGPUs built into the new Core Ultra 200-class CPUs. If they don't perform substantially better than what I'll be getting when I buy a Core Ultra 9 285K, then they won't be of any use to me.
 
I will buy one if it works in Linux and doesn't consume loads of power at idle, which was one of the main negatives about Alchemist for me. I didn't even mind the A770's performance all that much, and it had all the features I wanted (AI, ray tracing, a good OpenCL stack, AV1 acceleration, 16 GB of RAM). I just need something that isn't going to burn 40 W of power just to show the desktop.
 
I wonder if this will be the last GPU we see from Intel for a while!? I figure it's just a matter of time before the GPU division, which can't be making money, is cut. The market is too hard to penetrate, with Nvidia's monopoly and even AMD struggling to gain market share. I just don't see how this works for Intel in any way.
The last discrete GPU we see from Intel for a while? Yes, I'm inclined to believe so, given their 15% workforce reduction. It takes huge R&D funds to even somewhat compete with Nvidia on dGPUs, and even with AMD (even as AMD loses market share themselves). Intel has lost money on Arc Alchemist, and it's hard to see how they'll do much better than break even on Battlemage when AMD and Nvidia are releasing new products soon as well. Battlemage's advantage is holiday sales, if this release timeframe is true, but many will still wait to see how all three competitors stack up against each other.
 
I think Intel learned a very hard lesson with the Arc launch, so it definitely makes sense to be quiet until releasing a product. In general, I hope the recent launches are more indicative of how things will go: acknowledge that something exists, but don't really talk about it until there's something real to show.

There's no indication that Intel is slowing graphics hardware development, so the question is mostly what markets they play in. PTL uses Celestial for its iGPU, so we know the first three generations of Arc hardware are at least real. Whether or not this translates to anything outside of integrated solutions is anyone's guess, as nothing has been said.

I can't imagine they'll release anything that ticks the fast/efficient/cost boxes I'd need to buy a new video card, but I'm hoping they release something that can make a splash in the market.
 
Discrete GPUs will get a boost in sales if edge AI processing takes off ... and the AI processing benchmarks will be more important than the ridiculous gaming FPS.
 
I wonder if this will be the last GPU we see from Intel for a while!? I figure it's just a matter of time before the GPU division, which can't be making money, is cut. The market is too hard to penetrate, with Nvidia's monopoly and even AMD struggling to gain market share. I just don't see how this works for Intel in any way.

NVIDIA is not a monopoly. I wish people would stop saying that. They are a winner in a market of competition. Consumers have chosen what to buy. Not NVIDIA.

NVIDIA is reaping the rewards of smart decisions. Whether you like it or not.
 
Discrete GPUs will get a boost in sales if edge AI processing takes off ... and the AI processing benchmarks will be more important than the ridiculous gaming FPS.

I don't see why this wouldn't become a thing, since it's something the industry is clearly interested in. Not everything has to be run from the cloud.

I just hope Intel doesn't give up prematurely. They should keep pushing on despite the lack of profits right now.
 
Discrete GPUs will get a boost in sales if edge AI processing takes off ...
A major problem it could face is how big the LLMs tend to be. One of the things that makes integrated NPUs so interesting is that they can work directly from main memory. 192 GB is enough to fit GPT-3 class models, I'm pretty sure. Desktops will soon get a bump to 256 GB, while laptops should handle at least 128 GB.

A dGPU with PCIe 5.0 could still potentially stream the weights out of main memory, but PCIe would probably bottleneck it to the point where it's not much faster than an iNPU.
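
For a rough sense of why, here's a back-of-the-envelope sketch (the model size and bandwidth figures are my own assumptions, purely for illustration): a decoder LLM has to read essentially all of its weights for every generated token, so the token rate is capped by bandwidth divided by model size.

```
# Hedged back-of-the-envelope: a decoder LLM reads essentially all of its weights
# for every generated token, so tokens/s is bounded by bandwidth / model size.
# All figures below are assumptions for illustration only.

def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on token rate when weight streaming is the bottleneck."""
    return bandwidth_gb_s / model_size_gb

model_gb = 88     # e.g. a 175B-parameter, GPT-3-class model quantized to ~4 bits/weight
pcie5_x16 = 63    # ~63 GB/s usable on a PCIe 5.0 x16 link (theoretical peak ~64 GB/s)
ddr5_dual = 90    # ballpark dual-channel DDR5 bandwidth an iGPU/iNPU would see

print(f"dGPU streaming weights over PCIe 5.0 x16: ~{max_tokens_per_sec(model_gb, pcie5_x16):.1f} tok/s")
print(f"iNPU working from main memory:            ~{max_tokens_per_sec(model_gb, ddr5_dual):.1f} tok/s")
```

Both land around one token per second, which is why streaming weights over PCIe doesn't buy much over an NPU that simply reads main memory directly.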

That said, there are other AI models you could use that are small enough to fit in typical dGPU memory. So, if the focus isn't so much on LLMs, then dGPUs could remain an attractive option.
 
NVIDIA is not a monopoly. I wish people would stop saying that. They are a winner in a market of competition. Consumers have chosen what to buy. Not NVIDIA.

NVIDIA is reaping the rewards of smart decisions. Whether you like it or not.
Being classed as a monopoly doesn't mean you got there by immoral or illegal methods. All it means is that you dominate a market.

There are rules and laws which apply to firms with market dominance, to help ensure they don't abuse their power. The definition of market dominance usually starts at a 50% threshold, but sometimes higher thresholds apply.
 
The last discrete GPU we see from Intel for a while? Yes, I'm inclined to believe so, given their 15% workforce reduction. It takes huge R&D funds to even somewhat compete with Nvidia on dGPUs, and even with AMD (even as AMD loses market share themselves). Intel has lost money on Arc Alchemist, and it's hard to see how they'll do much better than break even on Battlemage when AMD and Nvidia are releasing new products soon as well. Battlemage's advantage is holiday sales, if this release timeframe is true, but many will still wait to see how all three competitors stack up against each other.
Yeah, I think so. With all the cuts and talks of takeovers (I know this is not happening), I just don't see how they can keep it afloat.

It's kinda sad though, as I was really hoping Intel would make some inroads in the GPU space. And just when it seems like they may offer something competitive (or at least maybe with Battlemage and beyond), they go up in smoke. Back to just Nvidia and AMD, then.
 
NVIDIA is not a monopoly. I wish people would stop saying that. They are a winner in a market of competition. Consumers have chosen what to buy. Not NVIDIA.

NVIDIA is reaping the rewards of smart decisions. Whether you like it or not.
Calm the jets! :)

There's no 'like it or not' at play here. I'm simply pointing out the high share of the market that Nvidia holds.
I have both Nvidia and AMD cards, and have no snobbery against either company. Intel dropping BMG isn't gonna impact that market share much at all, probably in about the same way that Alchemist did.

A monopoly is a market structure with a single seller or producer that assumes a dominant position in an industry or a sector.
This pretty much sums up Nvidia's current market position. Don't you think?
 
NVIDIA is not a monopoly. I wish people would stop saying that. They are a winner in a market of competition. Consumers have chosen what to buy. Not NVIDIA.

NVIDIA is reaping the rewards of smart decisions. Whether you like it or not.
I beg to differ. NVIDIA is a monopoly. Otherwise, they should let other GPU manufacturers get access to the CUDA ecosystem.
 
Looking forward to getting this. I don't think they will try to compete with lower-end GPUs, because they will already have those in their mobile units and the profits are not good at the entry level, but the high end is worth the investment. If they never sell another desktop GPU, then I'll have a collector's item. Can't lose with this purchase!
 
I will buy one if it works in Linux and doesn't consume loads of power at idle, which was one of the main negatives about Alchemist for me. I didn't even mind the A770's performance all that much, and it had all the features I wanted (AI, ray tracing, a good OpenCL stack, AV1 acceleration, 16 GB of RAM). I just need something that isn't going to burn 40 W of power just to show the desktop.
I bought it as a Serpent Canyon NUC with an Alder Lake i7-12700H for €700.

At that price the A770m was basically for free while the same money only got you an RTX 4060 with the same 16GB VRAM!

As a deal, that was hard to resist. But idle consumption really killed it as a NUC-based µ-server, especially since it operates some of its display outputs in "Optimus" or hybrid mode, that is, with the dGPU outputs being mirrored by the iGPU (e.g. all Thunderbolt/Alt-DP outputs) when active, while non-gaming 2D workloads would normally use the iGPU, too.

In other words, just like in similarly designed notebooks, you should be able to turn the A770m completely off: zero watts at desktop idle for the dGPU, hundreds of milliwatts on the iGPU.

Not supporting that in a desktop GPU is somewhat normal, but in the mobile variant it's basically inexcusable, and may be the reason notebooks based on the A770m were either never sold, quickly returned, or just ditched.

I did get the A770m down to 20 Watts on idle, but my other NUCs make do with single digits: for the entire NUC.

For AI, the sad personal experience is that anything but CUDA requires so much extra effort that it's not worth the money savings. And even with the competition so far behind, AMD still looks less downtrodden these days.

I passed it on to family as a gaming rig and they are quite happy with it, because they only turn it on to game.
Pretty sure it will physically last forever; the build quality was as excellent as the original purchase price implied.

As a platform it's hard to recommend, even if they still sell them at the very same €699...
 
A major problem it could face is how big the LLMs tend to be. One of the things that makes integrated NPUs so interesting is that they can work directly from main memory. 192 GB is enough to fit GPT-3 class models, I'm pretty sure. Desktops will soon get a bump to 256 GB, while laptops should handle at least 128 GB.
NPUs get integrated less because they can access main memory (GPUs can do that too, especially iGPUs) and more for processing efficiency: CPUs or GPUs might actually achieve similar performance (all limited by RAM bandwidth), but spend many more watts. On a notebook, that kills the AI via an empty battery.

Large language models need a lot of RAM, but also huge bandwidth, as they basically comb through all or at least a large portion of the weights: it's not your typical HPC random data pattern, but an exhaustive pass for every token. If your LLM is two times bigger than another, the same RAM bandwidth means half the token rate.

At 128/192/256 GB of RAM, even if it's good for 100 GB/s (average DDR5) or 200 GB/s (quad-channel LPDDR5), LLMs become very, very boring indeed (single-digit token rates per second). And it doesn't matter at all whether you're processing on an NPU, CPU or GPU; memory bandwidth is your only constraint at that point (not rumors, not just second-hand opinions; I tested, extensively).
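
To put rough numbers on that, a minimal sketch using the bandwidth figures above and an assumed model size (the specific size is my own hypothetical):

```
# Assumed: a model whose weights nearly fill 128 GB of RAM; bandwidths as quoted above.
weights_gb = 120
for name, bandwidth_gb_s in (("average DDR5", 100), ("quad LPDDR5", 200)):
    print(f"{name}: ~{bandwidth_gb_s / weights_gb:.1f} tokens/s upper bound")
```

Even the faster quad-channel figure only gets a model that size to a couple of tokens per second.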

16 GB of VRAM at 500 GB/s is pretty good on the A770m in the Serpent Canyon, only 50% less than the near 1000 GB/s the 24 GB of VRAM will do on my RTX 4090, and way more economical. That RTX has the ability to represent each weight at any of the 2-16-bit INT and the various xFLOATy precisions that current CPUs can't yet handle (another bonus of NPUs).

Intel and AMD GPUs might not trail Nvidia here, but that doesn't solve the problems: you're fighting against a quality wall with reduced precision and a memory wall with bigger models. RAM capacity is much cheaper to double than the bandwidth demand that grows just as linearly, and that's why everyone wants HBM to get to 4 TB/s or a little more.
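
As a rough illustration of that trade-off (the model size is a hypothetical example of mine, not a figure from this thread): fewer bits per weight shrink both the footprint and the bytes read per token, with the quality wall as the price.

```
# Assumed example: a 70B-parameter model at different weight precisions.
def footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    return params_billion * bits_per_weight / 8   # GB = billions of params x bytes per weight

for bits in (16, 8, 4):
    print(f"70B parameters at {bits:>2}-bit weights: ~{footprint_gb(70, bits):.0f} GB")
# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB -- even aggressively quantized, that
# still overflows 24 GB of VRAM, so bigger models hit the memory wall regardless.
```
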
A dGPU with PCIe 5.0 could still potentially stream the weights out of main memory, but PCIe would probably bottleneck it to the point where it's not much faster than an iNPU.
PCIe 5.0 x16 and DDR5 aren't that far apart, but both are 40x slower than HBM. Large language models aren't compute-limited; my 4090 never uses more than 50% compute even when LLMs fit inside its 1000 GB/s VRAM.

When your model gets bigger, even if only some layers go to RAM (I have 768GB on some of my machines), performance falls off a cliff and there is really no difference in CPU vs GPU inference speeds, no matter how many CPU or GPU cores you throw at the same memory bus.
That said, there are other AI models you could use that are small enough to fit in typical dGPU memory. So, if the focus isn't so much on LLMs, then dGPUs could remain an attractive option.
I cannot recommend buying any consumer GPU for AI work. If you treat it as a hobby or simply a new type of "text adventure" game, that's fine.

For a while, consumer GPUs did actually allow you to surf the AI wave and enabled quite a bit of experimentation. If you experiment with completely new ways of surfing, or with other domains of AI, GPUs might help you have some fun.

Consider that the competition there is tens of thousands of wannabe PhDs, with high ambitions and no sane constraints on overtime.

But "the real LLM game" is with big boys who surf with boards the size of ocean liners on waves higher than sky scrapers.
 
I did get the A770m down to 20 Watts on idle, but my other NUCs make do with single digits: for the entire NUC.
I have an N97-based mini-PC that uses 8 watts at idle, and it bugs me that it uses even that much. I think the M.2 drive and the 32 GB DIMM each use a couple of watts, as does the fan, but I'm not really sure where the rest of the juice is going.

HardKernel claims their ODROID-H4 uses only 2 - 2.9 W at idle, though I'm not sure what RAM & SSD that was tested with. They claim that unplugging the Ethernet cable gets it down as low as 1.5 W.

 