
Review Intel Arc B580 review: The new $249 GPU champion has arrived

I'd say that if Intel did that, they'd increase their own costs while failing to significantly increase their own sales since, like you said, nVidia users won't care. Although I agree about the VR market, maybe it was considered too small a market to pursue with more VRAM.

But Intel needs to do something to carve out a user base, and pricing itself $50 cheaper than nVidia's option while offering only 4GB more VRAM isn't the way to do it, not with new cards on the horizon. It needed to be a 16GB card for $199. Yes, it'd most likely lose Intel money, and they're pushing against both a year-long string of articles focusing on bad drivers and performance and a very small percentage of buyers who would actually consider buying outside their established camp, but it would gain them market share, a user base, and positive word of mouth.

AMD can't break into nVidia's market share, mostly because AMD prices their products too close, but also because their software was/is such garbage that it chased off people like me who swore they'd never go nVidia. What chance does Intel have unless they do something drastic?
 
how the B580 compares to the RTX 4070, which is the tech equivalent.
I'm not sure what "tech equivalent" means, but if you're talking process node, they're not: Battlemage uses N5, whereas Ada uses a custom TSMC node Nvidia refers to as 4N (no real public details on the specific optimizations).
So, using very similar hardware resources and power, this two-year-old card consistently delivers about twice the performance when deploying its DLSS muscle, while also costing twice as much.

...

In terms of technology, the two-year-old RTX 4070 outclassing Intel at 200% performance isn't just a monument of shame.
The 4070 doesn't deliver twice the performance or anywhere near it unless you're purposely tilting the scales. This is utterly dishonest and undermines whatever point you're trying to make with your entire post.
 
Something that is nice about the Battlemage cards and their low price-to-performance ratios is that they have good potential to rebalance pricing in the market.
If Nvidia releases an 8GB 5060 and people know that it won't do very well in a lot of games, they may have a hard time selling it for $350 with the B580 in the market. Likewise with AMD and the used market. If the Nvidia and AMD cards are almost here, then it is already too late for the 8GB midrange/lower-end models.

Some bring up less-than-perfect drivers as a dealbreaker, but those less-than-perfect drivers are still better (the occasional bad game and the corresponding wait) than running a dGPU that doesn't have enough VRAM for the game.

The B580 isn't perfect, but it is pretty solid all around. It seems like it will be tough to just ignore that it exists and keep the prices of less capable competitors higher.

I also preordered an LE on reveal day (edit: first preorder day), and Newegg gave an estimated ship date of the 18th, so hopefully it shows up not too much later. And it will be a real treat if that new software overclocking applies to my A750. I had completely given up on VRAM overclocking on that thing.
 
Looking for an upgrade for my RX480, and this might just be the answer. Still, I'll wait six months for the market to cool down and availability to increase. Most of all, I want the reviews!
 
Thank you for the review, Jarred, especially for bringing in newer titles. Please keep Space Marine 2, as it looks great while pushing both the CPU and the GPU. You can drop COD though 😬 maybe replace it with Alan Wake II.

Intel has done a brilliant job with the B580; I'm pleasantly surprised. Easily outperforming the 4060 while being much cheaper and offering more VRAM makes this a no-brainer choice for anyone considering a 4060 or 7600 XT.
 
We have a style guide that basically says most company names that aren't acronyms get written with a single initial capital letter. So Nvidia and Asus, but AMD. Gigabyte and many other companies do all caps as well, and we as journalists think it looks ugly. 🤷‍♂️

As for waiting, there are rumors AMD will have a budget/mainstream part in January, or at least announce it. It might cost $300, sure, but if it's 30% faster than the 7600 XT? That would be worthwhile.


The use of RT in games is also an important factor. Even though I selected some heavier games, none (other than Minecraft) are full path tracing and so aren't hitting the RT hardware as hard as possible. But RT is very overhyped in games still, IMO.


Yes, this is true. We don't know how AMD, Nvidia, Intel, Apple, etc. count transistors. There's no official way to do so. What we do know is that the RTX 4060 isn't that far behind the B580, despite being nearly two years old, and it has a far smaller die and uses 20~30 percent less power.


Time constraints, sadly, but there are professional tests showing transcoding, AI, 3D rendering, etc. on page six.
Can't you count those transistors yourself?
 
Even though the performance is in the 4060 area, I think Intel blew it with the 12GB of VRAM. While it is more than Nvidia offers, I think they should have put on 16GB like the A770, even if it did raise the price, because I don't foresee any nVidia users choosing a B580 over an RTX 4060 (the price difference is too small), and it would have opened it up more to markets which are perhaps more VRAM limited, such as VR. You could say it would then put it into competition with the 4060 Ti, which is about 20% faster, but is also far more expensive.
That would be a horrible idea. Raise costs without doing anything that would actually increase sales?
 
AMD cards have the same number of ray accelerators as compute units, don't they? That would put it at 32. Yep, I just checked, and Nvidia matches the SM count just like AMD matches the CU count, so it's 24 to 32, NOT 24 to 54.


Regardless, the "better" architecture is the one that gives the most performance per dollar. Period. Alchemist outperformed Ampere in ray tracing too.
Indeed, I looked at the row for the 7700XT by mistake.
 
I'm not sure what "tech equivalent" means, but if you're talking process node, they're not: Battlemage uses N5, whereas Ada uses a custom TSMC node Nvidia refers to as 4N (no real public details on the specific optimizations).
"Tech equivalent" for me means primarily equivalent resources turned into performance, which in a GPU design is mostly the RAM (size and bandwidth) and the electric power to burn.

There the B580 and the RTX 4070 get to use the same means, 12 GB of 192-bit VRAM at ~500 GByte/s and 200 Watts of power, but wind up in a very different performance class.

Pretty near all tech reviews of the B580 chose to stick with the 'Intel recommended price equivalent' RTX 4060, which delivers somewhat less performance but uses a 128-bit bus instead of 192-bit, 277 GByte/s instead of 500 GByte/s of bandwidth, 8 instead of 12 GB of RAM, and 115 Watts of power instead of 200, so significantly fewer resources for results that punch much higher than the linear equivalent of the resources given.
The 4070 doesn't deliver twice the performance or anywhere near it unless you're purposely tilting the scales. This is utterly dishonest and undermines whatever point you're trying to make with your entire post.
I made a mistake there. Since the only one who compared the B580 against an RTX 4070 was Phoronix (and the first review to come out at all), I looked at the Phoronix overall results, where it's 200 vs. 96, which qualifies as "double", but failed to notice that he compared it against an RTX 4070 Super, which gets 220 Watts instead of 200 and a slightly better bin of the same chip. But even at 180 vs. 96 it wouldn't be off by far.

Too many cards, too easy to miss, I'm sorry for that mistake!

Yet it doesn't change the overall picture much: in terms of technology, the B580 proves how woefully behind Intel is compared to NVidia, which sounds like "envy" in Spanish for good reason.

Intel may try to save face by offering it at that price, but current listings in Europe are at €325, which isn't €250.

The RTX 4070 is €550, the RTX 4070 Super is €650 and you can choose which one you want to compare it to.

The first may fall slightly short of "double"; the second hits it, per Phoronix, but perhaps Linux isn't the same story as Windows, even if it's the same games.

I consider throwing "dishonest" at me rather harsh and would argue that it is Intel who is trying very hard to tilt the scales.

And unless that price stays at half of what NVidia charges for a technically similar card, converting the same Wattage into significantly less performance is an uneasy decision to make. And that's not counting software aspects.

Of course, if it's good enough, it's good enough; at least you can't fall into the trap of buying more than you need within the Battlemage family currently.

But from a tech angle, Battlemage remains just shockingly bad, very much a Bulldozer vs a Core. And that means Intel stays far away from being a serious contender, which I'd like it to be.

But does that matter? Especially if energy efficiency isn't that high on your list?

It mostly depends on if Intel can make enough money from Battlemage to continue and perhaps become more efficient, too.

The price of the external resources, VRAM, PCBs etc. and power is the same for all, with the ASIC the main differentiator. B580 and RTX 4070 don't differ vastly in transistor count, 19B vs. effectively 25B for the 4070 bin, but even if Intel could get theirs for half the price that NVidia has to pay, that doesn't mean they can maintain the advertised price gap if NVidia decides to squeeze a bit. Nor does half price seem very likely, not after Pat got into a spat with TSMC's founder.

If NVidia were to lower prices ever so slightly, Intel would turn from blue to red numbers while NVidia is still raking it in and we all know who can afford what.

I bought an A770 perhaps six months after launch, because I wanted to replace a GTX 1060 in a 24x7 box that is crammed with storage and needs something compact and low-power. That A770 mostly just failed to work with my dual DP KVM or DP generally: only HDMI was ok, so I sent it back after benchmarking and got the compact and efficient PNY RTX 4070 instead, which just works fine, but unfortunately at twice the money.

Perhaps a year later I bought an A770m in an i7-12700H NUC when that NUC became so cheap that the A770m with 16GB VRAM and 500GB/s bandwidth was basically included for free (€700 for the Serpent Canyon). At that price it just wasn't a risk, more of a freebie.

It turned out ok for gaming, and it's still used in the family.

This summer I got a Lenovo LOQ ARP9 laptop for €750 that includes an RTX 4060m as well as a Rembrandt-R 8-core Zen 3. This laptop tends to use half the Wattage at the wall that the NUC uses, yet runs even Unreal 5 games like ARK: Survival Ascended enjoyably enough with software magic, where the Serpent Canyon simply fails to exceed single-digit FPS, perhaps only because ASA doesn't support XeSS, but who knows?

I'd say I try my best to give Intel a fair chance, in CPUs and GPUs. And if they fail it typically means I have to spend more, so why would I want to tilt scales?

If the B580 really drops to €250, I might get one to test, and even keep it if it gets the job done. But if I can get another RTX 4070 for not that much more, used or whatever, I'd pick that, because when the price for Nvidia is right, the value is better. And with all those boxes in the home, efficiency is money, too.
 
Yes, this is true. We don't know how AMD, Nvidia, Intel, Apple, etc. count transistors. There's no official way to do so. What we do know is that the RTX 4060 isn't that far behind the B580, despite being nearly two years old, and it has a far smaller die and uses 20~30 percent less power.
Yes, but TBH, while Intel is traditionally not good (bad) at GPUs, getting the best bang for the buck is more important to the vast majority of people. When one games, the 100W saved doesn't amount to much over the service life of a card.
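
Just to put a rough number on that (a minimal sketch; every input here is an assumption, so adjust for your own hours and electricity rate):

```python
# Rough sketch of what ~100 W of extra gaming draw costs over a card's life.
extra_watts = 100        # assumed difference under load
hours_per_day = 2        # assumption: average gaming time
years = 4                # assumption: service life of the card
price_per_kwh = 0.15     # assumption: USD per kWh; many EU rates are roughly double

kwh = extra_watts / 1000 * hours_per_day * 365 * years
print(f"~{kwh:.0f} kWh extra, ~${kwh * price_per_kwh:.0f} over {years} years")
# -> ~292 kWh extra, ~$44 over 4 years
```

Whether that matters is down to how much you game and what you pay for power.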
 
"Tech equivalent" for me means primarily equivalent resources turned into performance, which in a GPU design is mostly the RAM (size and bandwidth) and the electric power to burn.

There the B580 and the RTX 4070 get to use the same means, 12 GB of 192-bit VRAM at ~500 GByte/s and 200 Watts of power, but wind up in a very different performance class.
That seems like a really weird set of metrics to use. Let me flip that on its head: the RTX 3080, with 10GB of 320-bit VRAM at ~760GB/s, uses 100W more than the RTX 4070 Super for less performance. Turns out different video card designs on different nodes are different? The 40 series has a very optimized design on its own custom node, which combined goes a long way. They're quite a bit more efficient than AMD's parts as well (the 7700 XT is a 245W part and certainly isn't competition for the 4070, let alone the 4070 Super, despite having a higher TDP than either).
Pretty near all tech reviews of the B580 chose to stick with the 'Intel recommended price equivalent' RTX 4060, which delivers somewhat less performance but uses a 128-bit bus instead of 192-bit, 277 GByte/s instead of 500 GByte/s of bandwidth, 8 instead of 12 GB of RAM, and 115 Watts of power instead of 200, so significantly fewer resources for results that punch much higher than the linear equivalent of the resources given.

I made a mistake there. Since the only one who compared the B580 against an RTX 4070 was Phoronix (and the first review to come out at all), I looked at the Phoronix overall results, where it's 200 vs. 96, which qualifies as "double", but failed to notice that he compared it against an RTX 4070 Super, which gets 220 Watts instead of 200 and a slightly better bin of the same chip. But even at 180 vs. 96 it wouldn't be off by far.
Phoronix also testing on Linux means there's a much bigger set of variables than just the cards. Guru3d still uses Hitman 3, and if you look at their 1440p Ultra results for the B580, 7600, and 4060, you'll see only the 7600 is similar to the Phoronix results. The 4070 is ~60% faster than the B580 in those results compared to ~96% in the Phoronix review. Based on TPU's results at 1440p, the 4070 is around 55% faster than the B580 overall, which given the price disparity isn't a bad place to be.

Also, the 4070 Super has a significantly higher core count (~22%) than the 4070, so I wouldn't consider that to be "a slightly better bin".
The price of the external resources, VRAM, PCBs etc. and power is the same for all, with the ASIC the main differentiator.
The 4070 uses GDDR6X as well as having room for more memory, which means the PCB and memory power delivery will cost more as well.
I consider throwing "dishonest" at me rather harsh and would argue that it is Intel who is trying very hard to tilt the scales.
It's an accurate assessment because you chose to ignore plenty of reviews which were already out that massively dispute your claims. You're still trying to make links that aren't as simple as you're making them out to be.
If NVidia were to lower prices ever so slightly, Intel would turn from blue to red numbers while NVidia is still raking it in and we all know who can afford what.
I'm not sure why a what-if even matters at all. They're not threatened by a good value card, because if they cared about the sub-$300 market at all, the last two generations' lower end would have looked different. It won't be until there's a threat to a market that they do care about that we'll maybe see some price shifting. As long as the AI market is thriving, I'm not sure we will even then.
But from a tech angle, Battlemage remains just shockingly bad, very much a Bulldozer vs a Core.
Based on what? Your arbitrary comparison points which don't actually make any sense?

Intel seems to have come very close to matching AMD's perf/watt in this class of card. I don't think it's particularly reasonable to expect much more than that out of a second generation part. Intel's biggest power issue is actually the hoops required to jump through to knock idle power down to a reasonable amount.

At the end of the day, if you have $300 or less to spend on a video card (EU pricing aside, as it's high enough that the 7700 XT is a close enough alternative), the B580 is the only card that makes sense, unless it does not work well with whatever you're doing. This is probably why the only B580s you can buy in the US right now are very overpriced ones, as everything else is out of stock.
 
Yes, but TBH, while Intel is traditionally not good (bad) at GPUs, getting the best bang for the buck is more important to the vast majority of people. When one games, the 100W saved doesn't amount to much over the service life of a card.
Another interesting way of looking at this is die size. RTX 4070's AD104 die on the 4N node (tuned N5, basically) has "35.8" billion transistors in a 295 mm2 die area. BMG-21 has "19.6" billion transistors in a 272 mm2 die area. Based on that, they're pretty similar in size and could ostensibly have similar active transistor counts. That or Nvidia has a way more dense solution.

And it's telling that, while B580 might beat up on the 159 mm2 AD107 die used in the RTX 4060, Intel isn't even pretending it can compete with AD106 and the RTX 4060 Ti (especially 16GB version). That's only a 188 mm2 die, so however you want to slice it, Nvidia's chip gets better performance out of a much smaller die.

Like, even if 4N is 10~20 percent higher transistor density than N5, the facts are that AD106 delivers superior performance by all indications. Shrink BMG-21 by a hypothetical 20% and it would still be a 218 mm2 die size. That's about the maximum advantage I'd expect from 4N relative to N5, and it's probably closer to 10% realistically. So Intel needs about a 245 mm2 equivalent-node die size to beat Nvidia's 159 mm2 chip.

We can also estimate how many chips per wafer each company gets. For Nvidia, it's around 362 AD107 per wafer, or with a wafer price of $14000 (guesstimating), $39 per chip. LOL. For Intel, it's 210 chips per wafer, and if the wafer cost is the same that works out to $67 per chip.
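
For anyone who wants to poke at those numbers, here's a rough back-of-the-envelope sketch (assuming standard 300 mm wafers and the $14,000 guesstimate above; a real die-per-wafer calculator that accounts for scribe lines and edge exclusion lands somewhat lower, closer to the counts quoted):

```python
import math

WAFER_DIAMETER_MM = 300    # assumption: standard 300 mm wafer
WAFER_PRICE_USD = 14_000   # guesstimate from above

def dies_per_wafer(die_area_mm2: float) -> float:
    # Classic approximation: usable wafer area minus an edge-loss term.
    r = WAFER_DIAMETER_MM / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

for name, area in [("AD107 (RTX 4060)", 159),
                   ("AD106 (RTX 4060 Ti)", 188),
                   ("BMG-21 (B580)", 272)]:
    n = dies_per_wafer(area)
    print(f"{name}: ~{n:.0f} dies/wafer, ~${WAFER_PRICE_USD / n:.0f} per die")

# Density-adjusted die size: shrink BMG-21 by a hypothetical 4N-over-N5 advantage.
for gain in (0.10, 0.20):
    print(f"BMG-21 on a {gain:.0%} denser node: ~{272 * (1 - gain):.0f} mm^2")
```

However you tune the inputs, the gap in silicon cost per chip stays roughly the same shape.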
 
Even though the performance is in the 4060 area, I think Intel blew it with the 12GB of VRAM. While it is more than Nvidia offers, I think they should have put on 16GB like the A770, even if it did raise the price, because I don't foresee any nVidia users choosing a B580 over an RTX 4060 (the price difference is too small), and it would have opened it up more to markets which are perhaps more VRAM limited, such as VR. You could say it would then put it into competition with the 4060 Ti, which is about 20% faster, but is also far more expensive.
Sorry, but I strongly disagree on this one. 16GB on a budget $250 GPU is basically stupid. That either requires clamshell memory and a 128-bit interface (4060 Ti and 7600 XT), or else a 256-bit interface (like A770 and RTX 4080).

If you look at those clamshell cards, because of the lack of raw bandwidth you end up with higher performance from RX 6750 XT (which also has more raw compute). You need to scale VRAM bandwidth and capacity with compute, and there's nothing that suggests B580 needs 33% more bandwidth or capacity for gaming.

Memory interfaces don't scale well at all with the latest process nodes, so you'd get a much larger chip even if everything else stays the same. I'm sure B770 will be 256-bit and 16GB, and will also be 32 Xe-cores and thus have a reason for the additional bandwidth and capacity. Let's hope Intel does this with a price of $349 or less.

There are markets like AI where 16GB would have been nice to have, sure. But for a mass produced part designed to gain market share, I think Intel nailed the specs. 192-bit and 12GB is the sweet spot. You can easily do 1440p ultra, with GDDR6 memory on only one side of the PCB.

I'd wager AMD's RX 8600 will either follow the same path, or perhaps get GDDR7 with a 128-bit interface and 3GB chips for 12GB total. RTX 5060 will probably be 128-bit and 3GB chips as well, for 12GB. And bandwidth will also be similar maybe to GDDR6 on 192-bit (possibly higher, depending on the GDDR7 clocks they can realize).
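
If it helps, here's a quick sketch of how those capacities fall out of bus width and chip density (purely illustrative: it assumes 2GB GDDR6 chips, hypothetical 3GB GDDR7 chips, one chip per 32-bit channel, and two per channel in clamshell):

```python
# Illustrative only: VRAM capacity from bus width and per-chip density.
# Each GDDR chip sits on a 32-bit channel; clamshell mounts two chips per channel.
def vram_gb(bus_width_bits: int, chip_gb: int, clamshell: bool = False) -> int:
    channels = bus_width_bits // 32
    return channels * chip_gb * (2 if clamshell else 1)

print(vram_gb(128, 2))                  # 8 GB  (RTX 4060 style)
print(vram_gb(128, 2, clamshell=True))  # 16 GB (4060 Ti 16GB / 7600 XT)
print(vram_gb(192, 2))                  # 12 GB (B580 / RTX 4070)
print(vram_gb(256, 2))                  # 16 GB (A770 / RTX 4080 class)
print(vram_gb(128, 3))                  # 12 GB (hypothetical GDDR7 with 3GB chips)
```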

But any new GPU needs 12GB to handle up to 1440p with room to spare in most cases. 16GB is needed for a 4K card, and no $249 part is realistically targeting native 4K. Ever. By the time we have a $249 card with 16GB, we'll probably have games that mean such cards will only manage 1080p and maybe 1440p still. 😛
 
Another interesting way of looking at this is die size. RTX 4070's AD104 die on the 4N node (tuned N5, basically) has "35.8" billion transistors in a 295 mm2 die area. BMG-21 has "19.6" billion transistors in a 272 mm2 die area. Based on that, they're pretty similar in size and could ostensibly have similar active transistor counts. That or Nvidia has a way more dense solution.

And it's telling that, while B580 might beat up on the 159 mm2 AD107 die used in the RTX 4060, Intel isn't even pretending it can compete with AD106 and the RTX 4060 Ti (especially 16GB version). That's only a 188 mm2 die, so however you want to slice it, Nvidia's chip gets better performance out of a much smaller die.
Ada is pretty clearly very optimized for compute density. I think it's the most impressive part of the entire 40 series lineup. At the same time, that exposes the cynicism of the 40 series pricing structure.

Even though I imagine they got a good deal on the Samsung node used for the 30 series, I can't imagine they were happy with everything 3080/$700+ using the same die. Then there are the die sizes in general for the rest of the SKUs. Even though there's no doubt the TSMC node costs more, the higher number of dies per wafer likely, at worst, held the same margins.

I think this likely had at least a part in why the optimizations happened the way they did.
 
I'd wager AMD's RX 8600 will either follow the same path, or perhaps get GDDR7 with a 128-bit interface and 3GB chips for 12GB total. RTX 5060 will probably be 128-bit and 3GB chips as well, for 12GB. And bandwidth will also be similar maybe to GDDR6 on 192-bit (possibly higher, depending on the GDDR7 clocks they can realize).

I seriously doubt we'll see GDDR7 on the 8600 / 5060. Not because it isn't a good idea to do so, but because Nvidia and AMD haven't been concerned with more performance on the low-to-mid-end cards in several generations. It wasn't that long ago that the 760 was a 256-bit bus card.

My bet would be they'll use leftover GDDR6, or GDDR6X in Nvidia's case if we're lucky.
 
I seriously doubt we'll see GDDR7 on the 8600 / 5060. Not because it isn't a good idea to do so, but because Nvidia and AMD haven't been concerned with more performance on the low-to-mid-end cards in several generations. It wasn't that long ago that the 760 was a 256-bit bus card.

My bet would be they'll use leftover GDDR6, or GDDR6X in Nvidia's case if we're lucky.
You're probably right about GDDR7 on the lower spec GPUs, but I can dream! I mean, Nvidia and AMD both need to move beyond 8GB for the base models, or they will get destroyed in reviews.

As for 256-bit interfaces, back in the days of the GTX 760, memory interfaces were still scaling with process nodes. Now, the external memory interfaces on 4N (5nm class) are probably almost the same size as the interfaces on 8N (10nm class). Basically, each memory interface now costs almost twice as much die area as before.

Mind you, Nvidia and AMD could absolutely do wider than 128-bit on lower tier cards. But then you also end up with more VRAM capacity, more complex PCB, etc. and it all adds up.

I figure if you take something like AD106 with a 128-bit interface and change nothing else but go to a 192-bit interface — plus going from 8GB GDDR6 to 12GB — the total bill of materials probably increases by $75. On a $300 part, that would be very significant. Intel pretty much has to go there because it's the underdog. AMD and Nvidia may not. We'll have to wait and see.

But 8GB GPUs are totally a dead end now. Both games and AI need more than 8GB, so you've built in obsolescence if you choose to market a new 8GB card.
 
Also FYI, I've gone through the benchmark pages (gaming, proviz, AI, and power) and updated all the descriptions with more analysis and notes from my testing. TLDR: search for "driver" on the gaming pages and you'll see there are numerous mentions of Intel's drivers perhaps not being as optimized as they should be. That's based on performance of the B580 relative to A770/A750 in other games, as well as other performance oddities and even instability in a couple of games.

So, no, Intel is not out of the woods on driver quality yet. It's better now but remains a concern. It's something that can only improve with more time and effort, and it does look like Intel is putting in plenty of work (as evidenced by the 50+ drivers in two years for Arc). But if you're the type of gamer who would be angry about bad performance on launch day for a new game? I'd stick with AMD and Nvidia still.
 
I'd like to see an actual $250 version. Plus, I need a 2-slot (or thinner) GPU. With the power draw, I don't think the B580 will be a size I can use. But I'll check Microcenter when I get there and see what they've got.
 
I can't wait to see the specs of the B770. If it drops at $400 USD and we see specs around 4080 / 7900 XTX level, or close to it, we have a solid cheap card!!

I think at the very least the B770 will be at 7900 XT / 4070 Ti levels of performance.

If I see 7900 XTX or 4080 level, which may be super optimistic, I may swap to Intel from my current 7900 XTX.
 
Sorry, but I strongly disagree on this one. 16GB on a budget $250 GPU is basically stupid. That either requires clamshell memory and a 128-bit interface (4060 Ti and 7600 XT), or else a 256-bit interface (like A770 and RTX 4080).

If you look at those clamshell cards, because of the lack of raw bandwidth you end up with higher performance from RX 6750 XT (which also has more raw compute). You need to scale VRAM bandwidth and capacity with compute, and there's nothing that suggests B580 needs 33% more bandwidth or capacity for gaming.

Memory interfaces don't scale well at all with the latest process nodes, so you'd get a much larger chip even if everything else stays the same. I'm sure B770 will be 256-bit and 16GB, and will also be 32 Xe-cores and thus have a reason for the additional bandwidth and capacity. Let's hope Intel does this with a price of $349 or less.

There are markets like AI where 16GB would have been nice to have, sure. But for a mass produced part designed to gain market share, I think Intel nailed the specs. 192-bit and 12GB is the sweet spot. You can easily do 1440p ultra, with GDDR6 memory on only one side of the PCB.

I'd wager AMD's RX 8600 will either follow the same path, or perhaps get GDDR7 with a 128-bit interface and 3GB chips for 12GB total. RTX 5060 will probably be 128-bit and 3GB chips as well, for 12GB. And bandwidth will also be similar maybe to GDDR6 on 192-bit (possibly higher, depending on the GDDR7 clocks they can realize).

But any new GPU needs 12GB to handle up to 1440p with room to spare in most cases. 16GB is needed for a 4K card, and no $249 part is realistically targeting native 4K. Ever. By the time we have a $249 card with 16GB, we'll probably have games that mean such cards will only manage 1080p and maybe 1440p still. 😛
I predict the B770 will have 20GB of GDDR6. I suspect it will land at a $350-$400 USD mark, and if it can get close to the likes of a 4080 / 7900 XTX it will be a serious contender!!!
 