
Intel Arc B580 review: The new $249 GPU champion has arrived

That's not accurate; the value is only how much VRAM has been allocated, not what is currently in use. As I've stated, modern engines do not evict old resources until they absolutely have to. Actually, it's not really the game engine but WDDM that manages this. You have a lot more graphics memory than you do VRAM; run dxdiag and check Display 1 if you want to see how much you have available. In my case I have 32GB of system memory and a 3080 Hydro (12GB) card, and WDDM reports 27.7GB of available graphics memory, meaning there can be a total of roughly 28GB of graphics resources loaded at once.

Now that region is obviously split in two: 11.8GB of display memory and 15.9GB of system memory, and it's WDDM's job to manage in which of the two places graphics objects are located. I'll use a simple five-room game level that uses dynamic loading (since nobody likes loading screens) to illustrate.
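The arithmetic behind those dxdiag numbers can be sketched in a couple of lines. This assumes WDDM's usual heuristic of reserving roughly half of system RAM as shared graphics memory (the exact reservation varies slightly, which is why dxdiag shows 27.7GB rather than a round 28GB):

```python
def wddm_graphics_budget(vram_gb: float, system_ram_gb: float) -> float:
    """Approximate WDDM 'available graphics memory': dedicated VRAM
    plus shared system memory (typically about half of system RAM)."""
    shared_gb = system_ram_gb / 2
    return vram_gb + shared_gb

# 12GB card + 32GB system RAM -> roughly 28GB, close to dxdiag's 27.7GB
print(wddm_graphics_budget(12, 32))  # 28.0
```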

Game starts - loads 1 GB of common assets.
Player enters Room 1, 4GB of graphics assets are loaded.
Player enters Room 2, 2GB of new graphics assets are loaded that didn't exist in Room 1.
Player enters Room 3, 2GB of new graphics assets are loaded that didn't exist in either of the previous rooms.
Player enters Room 4, 2GB of new graphics assets are loaded.

How much total memory is "allocated" vs. "needed" in this scenario? GPU-Z and other tools will show 11~12GB of total VRAM "in use" or "allocated", because there is indeed 11~12GB of total graphics resources loaded by WDDM. How much is "needed" at any one time is only 4~6GB. Players inside Room 4 do not need the assets from Room 2 that aren't present in Room 4, and when the player is moving between rooms the engine asks WDDM to pre-load the new assets, which it then moves into VRAM. WDDM won't unload pre-existing assets unless it runs out of VRAM, because they may be referenced again. Think of unused VRAM as a rudimentary graphics cache.
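The five-room walkthrough above can be turned into a toy model: assets load on first use and, like WDDM, are never evicted while there is still room. The asset sizes are the GB figures from the scenario; the cache-style behavior is the point, not the exact numbers:

```python
# Toy model of the five-room level: assets are loaded on first use and,
# WDDM-style, never evicted while graphics memory still has room.
room_assets = {
    "common": 1,  # GB, loaded at game start
    "room1": 4,
    "room2": 2,
    "room3": 2,
    "room4": 2,
}

resident = set()  # everything still kept in graphics memory

def enter(assets):
    """Load any missing assets; return the working set (GB actually needed)."""
    resident.update(assets)
    return sum(room_assets[a] for a in assets)

enter({"common"})
for room in ("room1", "room2", "room3", "room4"):
    needed = enter({"common", room})

allocated = sum(room_assets[a] for a in resident)
print(allocated)  # 11 -> what GPU-Z reports as "allocated"
print(needed)     # 3  -> working set in Room 4 (common + room4)
```

The gap between the two printed values is exactly the "allocated vs. needed" distinction: tools report the 11GB resident total, while the game could survive on a fraction of that.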

Now that we understand that, which resource consumes the most VRAM? Textures. This is where graphics presets become very important. The ultra setting almost always means unreasonably large texture sizes, as in textures larger than the display resolution. On a large 4K display with a high-powered card these can produce slightly better detail than textures half their size, but on a 1080p~1440p display with a low-powered card they are utterly useless. Reducing just the texture size alone will dramatically reduce the VRAM needed at any one moment in time.
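To see why texture size dominates, here's a rough footprint calculation for an uncompressed RGBA texture (real games use block compression, which shrinks everything by a constant factor, so the ratios still hold; the ~1/3 mip-chain overhead is the standard geometric-series result):

```python
def texture_mib(side_px: int, bytes_per_px: int = 4, mips: bool = True) -> float:
    """Uncompressed square texture footprint in MiB.
    A full mip chain adds roughly one third on top of the base level."""
    base = side_px * side_px * bytes_per_px / (1024 * 1024)
    return base * 4 / 3 if mips else base

print(texture_mib(4096))  # ~85.3 MiB for one "ultra" 4K texture
print(texture_mib(2048))  # ~21.3 MiB -> half the side, a quarter the memory
```

Halving the texture resolution quarters the memory, which is why dropping from ultra to high textures frees VRAM faster than any other single setting.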

More than 8GB of VRAM is only necessary on larger, more expensive cards running higher resolutions and ultra texture sizes. Nobody buying a 4060 / 4060 Ti is going to be playing on a high-resolution display with "ultra" settings.
This, again, is not theoretical: it has been proven to be the case. Not even in extreme edge cases, but in normal usage within games.

Sorry, but you're wrong on this one. How the technicality works behind the scenes is quite moot.

Regards.
 
So GPU VRAM is built kinda differently than system RAM, even though they use the same general concepts.

Every GPU chip has a memory bus, which is really just a cluster of memory management units. The industry "standard" interface width for GDDR DRAM is 32 bits per chip. For a GPU like the 4060 with a 128-bit memory bus, that is four 32-bit channels, each going to a single 16Gb chip. Four of them together is 64Gb, which converts to 8GB. If we add two more 32-bit interfaces to the GPU, we get six channels for a 192-bit bus with 96Gb (12GB) of VRAM. Add two more channels for eight and now we are at 16GB of VRAM on a 256-bit bus. All DRAM can also be daisy-chained: you can have more than one chip per channel, and this is what they did with the 16GB 4060 Ti. The memory clock speed might have to come down a little to compensate for the longer signal path to the second chip, though. For GDDR6 this is called clamshell mode, and it's more that each chip gets half the bus.

That is why VRAM cannot be arbitrary; it's always sized in discrete chunks of (number of memory channels) × (density of chips). Currently GDDR6 is made in 8Gb (1GB) and 16Gb (2GB) varieties, with GPU makers using the higher-density parts and device manufacturers using the lower-density ones. GDDR7 was going to be the same, but Samsung kinda broke the mold with a 24Gb (3GB) chip.
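The channel arithmetic above fits in a few lines. This is just the formula from the post (channels × chips per channel × density), with the card names in the comments being the configurations discussed, not vendor data:

```python
def vram_gb(bus_width_bits: int, chip_density_gbit: int,
            chips_per_channel: int = 1) -> int:
    """VRAM in GB: (32-bit channels) x (chips per channel) x (Gb per chip) / 8."""
    channels = bus_width_bits // 32
    total_gbit = channels * chips_per_channel * chip_density_gbit
    return total_gbit // 8

print(vram_gb(128, 16))     # 8  -> 128-bit bus, 16Gb chips (e.g. 4060)
print(vram_gb(192, 16))     # 12 -> six channels
print(vram_gb(256, 16))     # 16 -> eight channels
print(vram_gb(128, 16, 2))  # 16 -> clamshell, two chips per channel (4060 Ti 16GB)
print(vram_gb(192, 24))     # 18 -> 192-bit bus with 24Gb (3GB) GDDR7 chips
```

The last line shows why the new 24Gb chips matter: they break the 8/12/16 pattern without widening the bus.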


I'm hoping next-generation cards use these new chip densities, but Nvidia is not very consumer-oriented right now, so it might be up to AMD and Intel.
I was going to reply with more technical details, but I'll just remind you that the GTX 970 "4"GB exists.

I didn't write "support" in quotation marks to be fancy :)

Regards.
 
Not really. Games now allocate like this to a large extent because the mainstream only has 8GB, so for a game to be remotely relevant and money-making, the developers need to write it that way.

In the era when the mainstream only had 4GB, the same argument was also true; once NVIDIA got every card to 8GB, within a year all new releases utilized 8GB well.

It's not that over 8GB is useless; it's that developers can't utilize more and still make the game sell.
 
It's not the memory size, it's the memory bandwidth. You even demonstrated as much.

https://www.tomshardware.com/reviews/nvidia-geforce-rtx-4060-ti-16gb-review

Going from 8 to 16GB on an even stronger card didn't move the needle at all at 1080p.
That's from when exactly, running games from when? Yeah: August 2023, with games that were mostly 2022 and earlier releases.

In the last year, I've seen a LOT more games pushing beyond 8GB. It's been the tipping point. Pre-2023, 8GB was sort of enough but 12GB was safer. Now? I wouldn't touch a brand-new card with only 8GB if it costs more than $200.

And sure, you can play at 1440p high and probably not run out of VRAM even with 8GB. But why would you want to get a card that you know is at the very limit of a "safe" amount of VRAM? We've had 8GB cards since 2014. Ten years.

With AI becoming a big enough deal, non-gaming use makes it potentially even more important to have more than 8GB of memory. Now Intel is saying, "Here's a 12GB card with a $249 MSRP." That's the bar to clear. Slightly higher performance with less VRAM isn't going to cut it, for me at least.

Am I missing something? The title says $249 GPU, but all the pricing says $390... that's not even close. Or is that pre-release, high-demand price gouging at work?
$249 MSRP. It's the first week after launch, technically still just days after, and it's the Christmas / holiday shopping season. This is a great card at $249, as long as you're willing to give the drivers a pass. Practically every GPU right now is more expensive than it was last month as well, due to people buying things for Christmas. Give it a month or so, we should see sufficient stock to hit the $249 MSRP. That's my bet.

If you really want one now? Get on every notification list and be prepared to act immediately. But there may be bots hitting supply first. (I'd be very leery of trying to scalp the B580, though! I mean, it's a great card at $250. It sucks as a $350+ card when you can buy a vastly superior 7700 XT for around that price.)
 
But for most consumers, PC is a second thought for gaming, which doesn't necessarily mean clicking everything to ultra. For those who allocate their disposable income, or their pocket money / Christmas gift, there is a hard price cap to divide between the various components. So for those whose price cap sits at or below the 4070, if the BMG card is the best option in that price bracket, they will just use it instead.
Most consumers, in my opinion, prefer PC gaming but are scared off by a few things:

1. Console gaming, while not as big games-wise, is much cheaper.

2. Which ties in with 1: PRICE. PC gaming is much more expensive; even a semi-crappy computer, let alone a decent one, will set you back more than a console!!

3. A PC by default is scarier to own, with parts that can fail more often (your average console is for the most part a lot more robust).

I've currently got (in my sig) 7800X3D and 7900X3D computers, a PlayStation 5 (the Pro felt like a scam, so despite my PS4 to PS4 Pro upgrade I decided to skip the Pro this gen) and the Xbox Series X.

With the current nature of releases, PC is becoming a lot more viable, with console ports / straight-to-PC games coming out.

But for the most part, average consumers are left wondering how to afford a computer, what parts they should get, and why all of that is way more expensive than a console.

Do I build so I can upgrade? But that in itself is scary for first-time builders!!

Do I buy a pre-built, or overpriced BS throwaway items like Alienware, etc.?

(For reference, my first ever computer was an Alienware Aurora R7. It cost me $2,500 AUD, and two years later, when I realized there was nothing I could really do to upgrade it, it inspired me to build my own, and I've never looked back.)

This is where things like Intel's cheaper GPUs and AMD's AM4 platform are great to buy and build with!!

I've built a few computers now (check my builds on page 199 of the member systems thread). I've also done an all-Intel build with a 14600KF and the Arc A770 16GB, and I've rebuilt my main rig many times over the last few years; now it's in the Thermaltake Tower 300 with the vertical stand (all scattered over pages 199 to 204 of the member systems thread).

I recommend everyone try their hand at it; there are plenty of videos on YouTube on how to do it (that's how I built my first computer, a 3700X / 5700 XT system).

Once you build, you will never go back to pre-built or bought rubbish!!

I love my consoles, don't get me wrong: less to go wrong and just easier. BUT for the sheer amount of games and stuff you can do with a computer, I think in the next 10 years console gaming will be gone; everything will be PC, handhelds, and mobile!!
 
Well, the 2080 Ti is running for $500-800 right now, so I think your argument is hollow. I'm not sure I would want to pick up a used, burned-out data miner for cheaper, either.
I did check "buy instantly" eBay.de listings for the RTX 2080 Ti before I wrote this (and again just now, to make sure)...

They hover at around €300 here and one Palit identical to mine is at €275 for immediate sale.
The cheapest B580 in stock from Proshop remains at €320.

Of course that's old vs new, and I wouldn't just blindly recommend that to everyone.

But for me it takes some shine off Intel's top offer and has me worried about the chances of their GPU line.

I also seem to remember that the RTX 2080 Ti was one of the worst GPUs when everyone was publishing crypto break-even tables, so I'm not sure they have been used that way.

Mine certainly wasn't. I bought it for CUDA work with machine learning, and it was rather quickly moved aside when an RTX 3090 came along with 24GB of RAM. It's as good as new, and I am happy to see how well it works with the Ryzen 5800X3D I passed on to one of my sons with a 3K monitor: no need for a new GPU at all!

I've tried to say two things:
  1. I'm very sceptical of the B580, because it's underperforming for the RAM and wattage they put in vs. much older hardware from Nvidia. I see some validation looking at actual availability and prices of the B580 vs. the review hype.

  2. You may be able to get the gaming performance you need by other means and devices, which you might actually get cheaper.
I have the (dearly paid-for) luxury of having a relatively large number of GPUs around for testing, via a large family that receives rolling upgrades whenever I retire my home-lab equipment, along with the PCs that used to go with it.

And while I am doing the next round of cross-grades for Christmas, I'm running benchmarks to check for stability and test for bottlenecks, but also to refresh my "presets" for what hardware you need to match what people do or play against what I have at hand, which covers a vastly bigger production date range (but with way fewer samples) than TH would keep around for testing.

And sometimes 10 year old hardware gets surprisingly good results these days, for different reasons.
  • sometimes GPU driver optimizations breathe new life into older hardware, e.g. AMD's and Intel's alternatives to DLSS (FSR and XeSS) can provide a critical boost for Nvidia Maxwell and Pascal GPUs. Likewise, DLSS works a lot better with RTX 20-series cards these days. A 980 Ti with 6GB turned out completely OK for lots of stuff at 1920x1080, and the RTX 2080 Ti is perfectly fine at 3K, quite often even acceptable at 4K.

  • some games have also become surprisingly good at using multi-core CPUs, while others still remain a disaster. I have a 22-core Broadwell Xeon CPU which theoretically boosts to 3.8GHz but never goes above 2.8 on game loads.

    That's quite normal for workstation CPUs, and the reason they were never recommended for gaming. But ever since desktop core counts exploded, quite a few games have become rather good at using many cores. I ran a lot of titles that were pushing the limits over the last few years and found they were able to saturate that RTX 2080 Ti at near 100% GPU load, delivering nearly the same results as when I put it in a Ryzen 7 5800X3D system.

    And those same CPU and GPU optimizations even benefit CPUs like an Ivy Bridge i7-3770, which also played a lot of titles rather well with the GTX 980 Ti at 1920x1080. They try your patience while loading (often heavy on a single CPU core), but then run much better than I thought possible.

    And then there are titles that still haven't changed much since Pentium 4 days, like FS2020 and FS2024, which fail miserably on that Xeon with only 2.8GHz of Broadwell IPC, while they work "just as well" with 16 Ryzen 9 7950X cores and my RTX 4090 as with only two cores permitted via Process Lasso.
 
One more point to add from me: in recent YEARS there aren't really many games that didn't disappoint aside from the graphics... we don't stand there looking at grass waving all day long, and the gaming side has been massively ruined by those microtransactions and the very wallet-unfriendly prices GPUs have stayed at, let alone the way-overdone social themes... The really young, who haven't experienced what real "games" should be, just can't afford those multi-grand PCs. And for those who can: when someone like myself googles the next AAA game's preview/review, so often it's "arrrgh, my 3070 will run for a few more years before anything is worth spending another grand on a GPU"...
 