News Intel Arc B580 Battlemage GPU allegedly surfaces on Geekbench — with 20 Xe Cores, 12GB of VRAM, and 2.85 GHz boost it falls short of the A580 desp...

Lunar Lake's integrated 8-Xe-core Battlemage GPU is reported as providing 64 TOPS INT8, with 40 TOPS being the performance MSFT specified for Copilot+ certification. By that measure the B580 should provide around 213 TOPS: (2.8/2.1) clock x (20/8) cores x 64 Lunar Lake TOPS. That should give a nice speed-up for models that use INT8 parameters.
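A minimal sketch of that back-of-envelope scaling, assuming the 64 TOPS / 8 Xe cores / ~2.1 GHz Lunar Lake figures above and the ~2.8 GHz, 20-core B580 numbers from the leak (variable names are just for illustration):

# Back-of-envelope INT8 TOPS estimate for the B580, scaling linearly from
# Lunar Lake's integrated Battlemage GPU. All inputs are assumptions taken
# from the post above and the leak; this is an estimate, not a measurement.

lunar_lake_tops = 64        # reported INT8 TOPS of Lunar Lake's iGPU
lunar_lake_xe_cores = 8
lunar_lake_clock_ghz = 2.1  # approximate iGPU clock assumed in the post

b580_xe_cores = 20
b580_clock_ghz = 2.8        # rounded down from the leaked 2.85 GHz boost

b580_tops = lunar_lake_tops \
    * (b580_xe_cores / lunar_lake_xe_cores) \
    * (b580_clock_ghz / lunar_lake_clock_ghz)

copilot_plus_tops = 40      # MSFT's Copilot+ certification threshold

print(f"Estimated B580 INT8 throughput: ~{b580_tops:.0f} TOPS")  # ~213 TOPS
print(f"Copilot+ requirement: {copilot_plus_tops} TOPS")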
 
MSFT and AWS have already announced fab deals for Intel 18A. There are at least two other unnamed 18A fab deals mentioned by Pat Gelsinger... plus some unnamed advanced packaging deals.

Intel already offers some advantages with BSPD and GAA transistors on 18A, with both Panther Lake and Clearwater Forest sampling this quarter.

Co-packaged optical chiplets and glass substrates are probably two to three years out. There's probably 15 years of R&D already paid for on those, so it would be a mistake not to see what customers they can attract with all that.

Intel is already building and packaging multi-tile Xeon chips on Intel 3, and one of their AWS contracts is for a custom Xeon on Intel 3, so the "Intel will fail on EUV" predictions have lost, and the doubling down now seems to be "Intel will fail on 18A".

AWS and MSFT have likely been among those building 18A test wafers over the last year, so their recently announced fab deals suggest good progress.
 
  • Like
Reactions: snapdragon-x
Given how different the performance of GPUs is under the Mesa Gallium drivers compared to the Windows drivers... well, I just don't care what its OpenCL performance in Windows is, since I won't be using it there.

I'm still getting good use out of my Nvidia GTX 1650, but given how the Intel Xe graphics run in my 11th Gen notebook (and with the 1115G4, which has a cut-down GPU at that)... well, that's not a quick GPU, but it's also constrained to something like a 5-watt TDP and shared memory. Just the linear scaling from higher clock speeds and more units would be plenty of power for me, let alone the architectural improvements and the speedup from dedicated VRAM. Eventually getting a GPU with 12GB of VRAM at a good price, instead of the 4GB I have now, would be lovely.
 
  • Like
Reactions: snapdragon-x
Considering AI is basically eating Intel's lunch (Nvidia has gone from being worth less than Intel prior to 2020 to being worth 30X as much, in terms of market cap at least), I don't think Intel can just pretend GPUs aren't important. Intel needs some changes if it's going to stay relevant.

I'm not saying GPUs alone are the solution, but Intel ignored GPUs for 20 years and is now paying the price. Or rather, it dabbled in GPUs a little (e.g. Larrabee) but was afraid they would hurt the CPU division. And now GPUs are indeed killing the CPU division... just not Intel's own GPUs.

Look at how many Chinese startups are getting major state funding to try to create competitive GPUs for AI. If China also sees this as important, why wouldn't Intel come to similar conclusions? And sure, Intel could lean more toward dedicated AI accelerators, but the point is it can't just give up on the non-CPU market, and GPUs are a good middle ground, as Nvidia has proven.

But are they really GPUs when used for AI and other non-graphical tasks?
That is why I think Intel should concentrate on other things besides GPUs, as they are not, and never have been, competitive in the discrete GPU market. They are, however, at the top for integrated graphics.
 
A lot of the parallel processing and matrix calculations needed apply both to AI acceleration and graphics rendering. A lot of AI is also applied to graphics in general, so pixel shaders are valuable for that too.

There is also a market for video encoding cards. If the lower-end cards are cheap enough, they make decent companion cards; many people already use A750 and A580 cards for exactly that.

And, as you say, they still have their own internal market for Xe cores, so ceasing development makes no sense.
 
  • Like
Reactions: JarredWaltonGPU
The Steam hardware survey provides a good overview of GPU market share. When it comes to consumer market share, Intel is way behind, with no realistic way to improve their position.

It reminds me of the days of Commodore, when they would bring out model after model to try to gain customers, and eventually ceased to exist...
 
  • Like
Reactions: JarredWaltonGPU
Steam is focused on gaming, so I'm not sure how a system with multiple GPUs would even show up in that survey.

Regardless, they've only had one launch, a recognizably bad one with underwhelming hardware. I wouldn't expect them to have much market share yet.

The exception being that the older Commodore systems were among the most successful home computers of their time. They failed to keep up with the trends and released their x86 clone too late, not to mention there was no easy way to convert Commodore/Amiga software to run on x86.

Intel hasn't had a successful discrete GPU, so the comparison falls a little flat. They are trying to enter an existing market, and theoretically, before all the stock price and leadership changes, they had the ability to absorb some losses to do so. Sounds like that is still the plan.

My understanding is they are not going to pursue the discrete mobile GPU market. Which makes sense: it's harder to convince laptop makers to tie their board designs to a market failure against Nvidia's overwhelming presence there, which is even worse than AMD vs. Nvidia in the discrete desktop market.
 
Yeah, I suspect the mobile market strategy is basically "make our integrated GPUs fast enough that people don't need dedicated GPUs." Lunar Lake is a good step in the right direction. Now take a cue from Apple and double down on that iGPU for a higher spec model.
 
  • Like
Reactions: adbatista