News Battlemage G21 GPU spotted in Intel oneAPI code update

bit_user

Polypheme
Ambassador
I wanted to buy an Alchemist A770 to fiddle around with for compute. However, I was put off by its high idle power and instead made do with the iGPU. I really hope Battlemage sorts out this problem; if it does, I'm very likely to buy one.
 
  • Like
Reactions: thisisaname
Secondly, for whatever reason, its biggest GPU, the G21, is only designed to fuel mid-range, mass-market-style graphics cards. With this in mind, we will underline expectations that Battlemage isn’t going to go toe-to-toe with the best of the Nvidia Blackwell RTX 50 products, or AMD RDNA 4-based Radeon RX 8000 products.

They don't HAVE to go toe-to-toe with AMD, or Nvidia for that matter. Even AMD isn't trying to compete with Nvidia, at least in the high-end GPU market, which the RTX 50 flagship series will be capturing.

Btw, Battlemage-G10 will be the bigger silicon, with Battlemage-G21 as the smaller chip aimed at entry-level systems.

If Intel can release highly competitive mainstream/mid-range "gaming" GPUs within a reasonable budget, then they can grab a lot of market share, unlike Alchemist, which was a bit late to the party. Despite that, it's impressive to see how much work Intel's driver team has put in over the past year or so.

Current Alchemist A-series Arc discrete GPUs have seen a pretty decent performance uplift in most of the latest AAA/AA games, if not all. We just need more mainstream GPUs in a price bracket that a lot of gamers can afford, and not some highly expensive halo product (thinks of RTX 5090 :sneaky:).

The Arc A770, which sports a 16GB VRAM buffer, is still considered a pretty decent gaming GPU based on its price/performance ratio.

I also look forward to updates to the XeSS upscaling tech, with new iterations, along with the rumored XeSS "ExtraSS" tech, which could be a frame-generation technique based on "frame extrapolation" instead of "frame interpolation".

Based on Intel's paper, the extrapolation method uses information beyond the bounds of the input samples to produce an approximation of the next frame, i.e. it predicts an upcoming frame from already-rendered ones rather than waiting for the following one.

Extrapolation might produce less reliable results and add more artifacts, but we have seen similar issues with interpolation as well, so with a few tweaks and optimizations, XeSS "ExtraSS" could be a middle ground, offering good quality with higher FPS.

So I'm curious to learn how this pans out, and to see it implemented in future games, assuming Intel is going for this tech and it is feasible to use. Nothing is official yet, though.

https://asia.siggraph.org/2023/presentation/?id=papers_744&sess=sess155

Slightly off-topic:

BTW, the research paper itself also highlights the differences between interpolation and extrapolation, which are fairly intuitive.

While frame interpolation generates better results, it also introduces higher latency when generating frames, which is why Nvidia and AMD need latency-reducing technologies such as Reflex and Anti-Lag to deliver a smooth frame-generation experience.

Extrapolation doesn't add much latency, but it has a harder job because it lacks key information about the frame it is trying to produce.

But the paper claims that ExtraSS aims to solve this with a new "warping" method that can produce better quality than previous frame-generation methods, and with lower latency.
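
To make the latency trade-off concrete, here's a toy sketch of the two approaches on a 1-D "image" (purely my own illustration, not Intel's actual ExtraSS algorithm; the function names and numbers are made up):

```python
import numpy as np

def render(t, width=16, motion=2):
    """Pretend renderer: a single bright pixel moving `motion` pixels per frame."""
    frame = np.zeros(width)
    frame[(t * motion) % width] = 1.0
    return frame

def interpolate(frame_a, frame_b):
    # Interpolation needs BOTH the previous and the NEXT rendered frame,
    # so the generated frame can only be shown after frame_b exists
    # (roughly one extra frame of latency).
    return 0.5 * (frame_a + frame_b)

def extrapolate(frame_prev, frame_curr, motion=2):
    # Extrapolation "warps" the current frame forward along the estimated
    # motion; no future frame is required, so no added latency, but the
    # guess can be wrong (disocclusions, changing motion, etc.).
    return np.roll(frame_curr, motion)

f0, f1, f2 = render(0), render(1), render(2)

print("interpolated frame :", interpolate(f0, f2))  # needs f2 to exist first
print("extrapolated frame :", extrapolate(f0, f1))  # uses only past frames
print("ground-truth f2    :", f2)
```

In this toy case the extrapolated frame happens to match the ground truth because the motion is constant; the hard part the paper addresses is keeping the warp accurate when it isn't.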

More info can be found here:

https://dl.acm.org/doi/pdf/10.1145/3610548.3618224
 
Last edited:

35below0

Commendable
Jan 3, 2024
1,482
623
1,590
They don't HAVE to go toe-to-toe with AMD, or Nvidia for that matter. Even AMD isn't trying to compete with Nvidia, at least in the high-end GPU market, which the RTX 50 flagship series will be capturing.
This is true. Most of the GPUs sold are entry- to mid-tier. I think EVGA reported they lost money on 3090 and 3080 cards. The xx60 and xx50 market is the lucrative one.

One other thing Intel have to do is bribe slap Valve so they're accurately represented in the Hardware Survey.
It's unimportant buuut, should Intel rocket up the chart, it will make headlines.
 
  • Like
Reactions: thisisaname

Pierce2623

Upstanding
Dec 3, 2023
182
163
260
Honestly, if they can hit 4070 performance at like $300, I'll buy them for my friends that don't have a GPU. The A750 might've sucked overall, but not for $180 it didn't.
 
  • Like
Reactions: thisisaname

Eximo

Titan
Ambassador
The old guesswork was 4070-like performance, but that was well before this latest batch of driver improvements. That estimate was made by looking at relative performance and GPU size.

Leaks so far only reveal something akin to an A580-class GPU with 24 Xe cores, and the expected 64-Xe-core G20 GPU hasn't been spotted anywhere.

It should still be just as interesting, with performance being all over the place. Pretty sure I am going to switch to it just for fun, and look into water cooling it for no reason.
 
Secondly, for whatever reason, its biggest GPU, the G21, is only designed to fuel mid-range, mass-market-style graphics cards.

Uh? To clear up any confusion, the Battlemage-G10 will be the bigger silicon, with Battlemage-G21 being the smaller chip aimed at entry-level systems.

The Battlemage BMG-G10 should be the successor to the Alchemist ACM-G10, which currently powers the Arc A770 and Arc A750 GPUs.

BMG-G10: an enthusiast-grade offering, <225W SKU.

The BMG-G21, meanwhile, could be a mid-tier performance SKU with a <150W TDP.
 
Last edited:

Giroro

Splendid
Whatever is going on with Battlemage, they probably had to pivot from their original plans when AI took off.
I think AMD had this issue as well, assuming the rumors about weird die numbering and the cancellation of high-end MCM cards are accurate. Although I think it may be more of a delay than a cancellation, to rework how their AI hardware works, and the first wave/midrange will probably be missing those improvements.
We might get a couple of stop-gap cards this year, in an effort to better compete against Nvidia next year... or maybe they'll crunch and rush out some buggy/broken/unfinished products.

Hopefully both brands get big announcements at Computex in a few weeks. If Intel doesn't use that opportunity to present a solid roadmap, then I doubt Battlemage will happen this year.
 
  • Like
Reactions: thisisaname
There are already tools being used for verification and testing of Battlemage discrete GPUs, though.

Two test tools for Battlemage GPUs are listed on the Intel Design-In Tools webpage: the BGA2362-BMG-X2 and the BGA2727-BMG-X3-6CH (this link is currently not working).

The second chip has a slightly bigger package size than the top Alchemist chip. The X2 tool features a 2362-ball BGA array, and the X3 tool features a 2727-ball BGA array.

So I'm gonna assume that the bigger Battlemage die might come in the 2727 array, though this is just speculation on my part.

f7I943v.png


lSFmHy2.png
 
Last edited:

bit_user

Polypheme
Ambassador
There are already tools being used for verification and testing of Battlemage discrete GPUs, though.

Two test tools for Battlemage GPUs are listed on the Intel Design-In Tools webpage: the BGA2362-BMG-X2 and the BGA2727-BMG-X3-6CH (this link is currently not working).

The second chip has a slightly bigger package size than the top Alchemist chip. The X2 tool features a 2362-ball BGA array, and the X3 tool features a 2727-ball BGA array.

So I'm gonna assume that the bigger Battlemage die might come in the 2727 array, though this is just speculation on my part.

f7I943v.png


lSFmHy2.png
Thanks so much for the info!

Can you relate these pin-counts to Alchemist? I'm mainly curious what they might tell us about memory capacity and data width.
 
  • Like
Reactions: Metal Messiah.
RTX 4070/Ti-level performance under $500 seems overly optimistic to me, though! But if Intel can make it happen, then that would be great. They also need to improve power consumption with the Battlemage silicon.

The Alchemist silicon is not that power efficient overall. Recent drivers might have addressed some of the previous high 'idle state' power draw issues, but in gaming/load the cards still sip power, at least when compared to Nvidia's RTX 40 series cards.
 

bit_user

Polypheme
Ambassador
in gaming/load the cards still sip power, at least when compared to Nvidia's RTX 40 series cards.
I think you mean "guzzle power". I've noticed WCCFTech misusing the metaphor of sipping, as if it were simply analogous to "consuming". It seems that author is unaware that to sip means to drink a small amount, or slowly.

Eh, at least their English is better than my Urdu.
; )
 
  • Like
Reactions: Metal Messiah.
Off Topic.

Intel told partners last week that Ponte Vecchio is starting its sunsetting process. Instead, it seems like Intel is focusing its production capacity on the Intel Gaudi 2/ Intel Gaudi 3 and then ramping its converged HPC and AI Falcon Shores GPU for 2025.

This follows the company’s announcement that it was skipping Rialto Bridge for Falcon Shores.

As of last week, Intel Ponte Vecchio is moving into a new phase. Instead of hunting for new clusters, it is going to continue to be sold and filled into existing clusters.

Likewise, the Intel Xe architecture is important to the company, so Intel is still going to continue developing the software behind Intel Xe as it moves ahead to Falcon Shores, hopefully next year.

Text Source:

 
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
PS: Btw, we shouldn't be taunting others like that though. Nvm.
Just to clarify: I'm pretty self-aware, whenever I criticize the writing of someone who's a non-native English speaker, that I don't know any other language well enough to even attempt to write articles in it! That's what I was trying to allude to. It's like: "here, let me criticize your execution of this thing I couldn't do." So, definitely requires some humility.
 

bit_user

Polypheme
Ambassador
Intel told partners last week that Ponte Vecchio is starting its sunsetting process.
TBH, I was surprised they even tried to sell it on the open market! I thought it was going to end up being something of a one-off, for a handful of HPC customers. Without the current AI boom, I think it probably wouldn't be profitable for them to sell.

Instead, it seems like Intel is focusing its production capacity on the Intel Gaudi 2/ Intel Gaudi 3 and then ramping its converged HPC and AI Falcon Shores GPU for 2025.
Makes sense.
 
  • Like
Reactions: Metal Messiah.

TechyIT223

Proper
Jun 30, 2023
162
33
110
So will we see 8GB VRAM GPUs this time as well, or will Intel set 16GB VRAM as the baseline?

If 16GB is the minimum, then the cards won't be as cheap as you guys are expecting. Thoughts?

I would expect Intel to make 12GB the baseline memory for entry-level Battlemage GPUs. But hey, who am I kidding 😂
 

Pierce2623

Upstanding
Dec 3, 2023
182
163
260
Whatever is going on with Battlemage, they probably had to pivot from their original plans when AI took off.
I think AMD had this issue as well, assuming the rumors about weird die numbering and the cancellation of high-end MCM cards are accurate. Although I think it may be more of a delay than a cancellation, to rework how their AI hardware works, and the first wave/midrange will probably be missing those improvements.
We might get a couple of stop-gap cards this year, in an effort to better compete against Nvidia next year... or maybe they'll crunch and rush out some buggy/broken/unfinished products.

Hopefully both brands get big announcements at Computex in a few weeks. If Intel doesn't use that opportunity to present a solid roadmap, then I doubt Battlemage will happen this year.
What? The AI/overall compute performance was the only thing Alchemist was already good at.
 

35below0

Commendable
Jan 3, 2024
1,482
623
1,590
Yeah. I don't think 8GB VRAM GPUs are going anywhere either, though, despite the high memory usage demands of the latest gaming titles.
Entry-level and low-mid-tier GPUs can keep 8GB, if the performance is good and the price is attractive.

There will not be many titles that require more than 8GB, although that doesn't take mods into consideration. Modded games greatly exceed the base requirements.

I think if you need 12GB or 16GB, you will have to pay for it. Or make do with one of the 8GB GPUs.
One good reason to spend more on more VRAM is longevity, unless you're willing to change GPUs every odd generation. A majority of gamers do not upgrade so often, judging by how many older GPUs are still in use.
 
  • Like
Reactions: Metal Messiah.
Entry-level and low-mid-tier GPUs can keep 8GB, if the performance is good and the price is attractive.

There will not be many titles that require more than 8GB, although that doesn't take mods into consideration. Modded games greatly exceed the base requirements.

I think if you need 12GB or 16GB, you will have to pay for it. Or make do with one of the 8GB GPUs.
One good reason to spend more on more VRAM is longevity, unless you're willing to change GPUs every odd generation. A majority of gamers do not upgrade so often, judging by how many older GPUs are still in use.

Totally agree with you. The debate about how much VRAM is required often turns into a confusing 'flame war' on any forum, lol, but to be honest, it all depends on the type/genre of game one wants to play, and also on the settings and screen resolution used.

Obviously, high-resolution, high-refresh-rate monitors will demand more VRAM, at least in modern AAA games, not to mention the optional high-quality texture/HD packs. Older games can still make do with a smaller frame buffer, but that also depends on the game engine and the video/graphics settings applied.
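
Just as a rough back-of-the-envelope illustration (my own numbers and assumptions, not from any benchmark or spec), here's how quickly the render targets alone grow with resolution; textures, geometry and driver overhead come on top of that:

```python
# Back-of-the-envelope sketch (my own assumptions): VRAM taken up by
# full-resolution render targets alone, ignoring textures and overhead.

def render_target_mib(width, height, bytes_per_pixel=4, num_targets=6):
    """Approximate size of `num_targets` full-resolution buffers
    (color, depth, G-buffer layers, etc.) in MiB."""
    return width * height * bytes_per_pixel * num_targets / (1024 ** 2)

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

for name, (w, h) in resolutions.items():
    print(f"{name:>5}: ~{render_target_mib(w, h):.0f} MiB of render targets")
```

That slice of VRAM roughly quadruples going from 1080p to 4K, and high-resolution texture packs scale the rest of the budget up even further.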

I mostly play OLD and indie/AA titles, so 8GB VRAM has been sufficient to game at 1080p/60Hz (my current monitor supports up to 144-165Hz).

But having a GPU with more VRAM means you are at least more future-proof, because upcoming next-gen AAA games will be more VRAM-hungry and demanding on the hardware. For AA and other indie titles, it's a different story, though.
 
Last edited: