News Intel Teases Arc Limited Edition Discrete Desktop GPU


TJ Hooker

Titan
Ambassador
Depends on your definition of fine. The RX6500 on 3.0x4 vs 4.0x4 shows that there is a pretty good number of recent games where 4GB just doesn't cut it unless you have somewhat decent access to system memory for asset streaming. It will only get much worse for 4GB cards from here.

4GB for new gaming cards is NFG beyond lightweight games.
The 6500 XT still gets 60+ fps in most games at 1080p/medium. And even if it had 8GB VRAM, it wouldn't be capable of 60+ fps at ultra settings (based on the 6500 XT performing the same as the 5500 XT 4GB, and the 5500 XT 8GB not being capable of 1080p ultra 60+ fps). At settings where the 6500 XT GPU is actually capable of ~60 fps, it looks like the lack of VRAM makes <10% difference. In other words, 4GB VRAM can still be enough to get by for 1080p gaming.

Not that I'm recommending anyone buying a GPU today get a 6500 XT (or other 4GB GPU), unless it's dirt cheap and/or you're desperate.
 

InvalidError

Titan
Moderator
4GB VRAM can still be enough to get by for 1080p gaming.
It may be sort-of-enough for 1080p gaming but then you run into situations where the slower GTX1650 Super manages to outclass the RX6500 simply because it has 2-4X as much PCIe bandwidth to use system memory for streaming, which means the RX6500 really needed at least 6GB (or 4.0x8) to avoid this situation.

4GB isn't enough for the RX6500 to perform like it could/should.

Edit: Also, about the 4GB vs 8GB RX5500: although the RX5500 may not hit 60+fps at increased details, the 8GB model is still upwards of 70% faster than the 4GB model when you start cranking up details at still-usable FPS.
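To put the "2-4X as much PCIe bandwidth" figure in concrete numbers, here is a minimal sketch of the theoretical link rates involved (real-world streaming throughput will be lower; lane widths are the cards' standard configurations):

```python
# Rough PCIe link-bandwidth comparison backing the "2-4X" figure above.
# Per-lane rates are theoretical maxima after 128b/130b line-code overhead, in GB/s.
PER_LANE_GBPS = {3: 8 * 128 / 130 / 8, 4: 16 * 128 / 130 / 8}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

links = {
    "RX 6500 XT on PCIe 3.0 x4": link_bandwidth(3, 4),   # x4 card in an older 3.0 board
    "RX 6500 XT on PCIe 4.0 x4": link_bandwidth(4, 4),
    "GTX 1650 Super on PCIe 3.0 x16": link_bandwidth(3, 16),
}
for name, gbps in links.items():
    print(f"{name}: ~{gbps:.1f} GB/s")
# ~3.9, ~7.9 and ~15.8 GB/s respectively: the 1650 Super has 2-4x the bus
# bandwidth available for streaming assets out of system RAM.
```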
 

jacob249358

Commendable
Sep 8, 2021
636
215
1,290
Depends on your definition of fine. The RX6500 on 3.0x4 vs 4.0x4 shows that there is a pretty good number of recent games where 4GB just doesn't cut it unless you have somewhat decent access to system memory for asset streaming. It will only get much worse for 4GB cards from here.

4GB for new gaming cards is NFG beyond lightweight games.
yeah I don't think there should be any new cards with less than 6 or 8gb
 
  • Like
Reactions: Soaptrail
Jan 3, 2022
67
13
35
what are you talking about? There are many 4/6gb cards that do fine at 1080p.
Minimum for what tier? High end? I would agree we're approaching this, but it's still going to be a generation away before it becomes an absolute minimum. For anything lower? No.

Baking lightmaps in Unity's Progressive Lightmapper or rendering in 4K in Blender Cycles with GPU compute requires more VRAM. But if memory becomes cheaper in the future, why shouldn't we utilize this resource more everywhere: rendering, games, graphics applications, ...
 

deesider

Distinguished
Jun 15, 2017
308
147
18,890
Baking lightmaps in Unity's Progressive Lightmapper or rendering in 4K in Blender Cycles with GPU compute requires more VRAM. But if memory becomes cheaper in the future, why shouldn't we utilize this resource more everywhere: rendering, games, graphics applications, ...
Sure, there are many scenarios in which large amounts of VRAM can be required. AI is a particular case where the model size is entirely constrained by VRAM.

There are no games that require 16GB and there are none on the horizon.

The rendering examples you have given can also be performed without using the GPU - so they aren't minimum requirements; rather, they are tasks that can be accelerated with the right hardware.
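As a rough, back-of-the-envelope illustration of how a model's size pins down its VRAM requirement (the parameter counts below are hypothetical examples, not figures from this thread):

```python
# Back-of-envelope VRAM needed just to hold a neural network's weights
# (inference only; training needs several times more for gradients,
# optimizer state and activations).
def weights_vram_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """VRAM in GB for the weights alone (2 bytes/param = fp16/bf16)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (1, 7, 13):  # hypothetical model sizes, in billions of parameters
    print(f"{params}B params @ fp16: ~{weights_vram_gb(params):.0f} GB of VRAM")
# 1B -> ~2 GB, 7B -> ~14 GB, 13B -> ~26 GB, before activations or caches,
# which is why model size is effectively capped by the card's VRAM.
```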
 

jacob249358

Commendable
Sep 8, 2021
636
215
1,290
Baking lightmaps in Unity's Progressive Lightmapper or rendering in 4K in Blender Cycles with GPU compute requires more VRAM. But if memory becomes cheaper in the future, why shouldn't we utilize this resource more everywhere: rendering, games, graphics applications, ...
Very few people do Blender renders in 4K. 6 or 8GB is enough for 1080p.
 

InvalidError

Titan
Moderator
Very few people do Blender renders in 4K. 6 or 8GB is enough for 1080p.
Relatively few people do Blender at all. As for 6/8GB being enough, that may be true for now, though this tends to change quickly with each console cycle. Between the PS5 and X-SX, we can expect the minimum bar to rise relatively quickly, like it does with every new console generation.
 
  • Like
Reactions: saltweaver
Intel better make their discrete GPUs on their own process/foundry soon, because short-term production at TSMC isn't a great long-term strategy. For Intel to realistically be able to survive as the 3rd player in the current two-player market, they need to bring a faster turn-around time in production from their own foundries as one of their advantages (at some future point). It's the only way they will be able to be a viable 3rd option long-term, outside of their integrated GPUs.
 

InvalidError

Titan
Moderator
Intel better make their discrete GPUs on their own process/foundry soon, because short-term production at TSMC isn't a great long-term strategy.
Intel contracting stuff out to TSMC isn't new. Before its repeat 10nm delays, it was already contracting out lower-value stuff to save its own fab capacity for higher-value/margin parts. Even after Intel's new fabs come online, it will likely continue contracting out the less profitable bits and bobs that don't make much sense to make in-house while its own fab capacity is maxed out and cheaper third-party manufacturing is available.

With 2.5D/FOVEROS/EMIB/SoS/etc. becoming the norm with future products, expect contracting out miscellaneous interconnect parts to become the norm with fabs specializing in subsets of interconnect and 'lego' type components, especially if efforts to produce standardized 2.5D interfaces and footprints for reusable multi-vendor chiplets/tiles for boilerplate functions pan out.
 

Co BIY

Splendid
With 2.5D/FOVEROS/EMIB/SoS/etc. becoming the norm with future products, expect contracting out miscellaneous interconnect parts to become the norm with fabs specializing in subsets of interconnect and 'lego' type components, especially if efforts to produce standardized 2.5D interfaces and footprints for reusable multi-vendor chiplets/tiles for boilerplate functions pan out.

Based on the slide TerryLaze posted, Intel is planning to produce Xe series parts on several different processes, in-house and externally. The Rocket Lake reverse port shows that the advancement of development tools is allowing for more production/design flexibility.

Collaboration and Coopetition. - "They know our secrets."

Globally efficient production that saves the construction of a $100 billion fab complex is a lot of savings/profit to share around!
 
Baking lightmaps in Unity's Progressive Lightmapper or rendering in 4K in Blender Cycles with GPU compute requires more VRAM. But if memory becomes cheaper in the future, why shouldn't we utilize this resource more everywhere: rendering, games, graphics applications, ...
If you're buying a midrange card to do this sort of work, then either you learn to live with it because that's all you could afford, or you wonder why you didn't spend more money on a higher end card.

Besides that, the amount of VRAM a card can have is limited by the number of memory channels it has and what chip capacities were available at the time. Since 2GB chips have become the norm, sure, we could get that 16GB minimum, but not every GPU would drastically benefit from a 256-bit bus. More memory lines also mean more power consumed, and some of these GPUs have a power target to hit. And spinning up a new product line in the middle of the release cycle just to throw in more VRAM historically never really pans out, because game developers aren't suddenly going to take advantage of it; they'll keep catering to the most popular configuration, which isn't the most recently released thing.
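A quick sketch of the memory-channel constraint mentioned above, assuming standard (non-clamshell) GDDR6 with one 32-bit chip per channel:

```python
# Why VRAM capacity is tied to bus width: each GDDR6 chip exposes a 32-bit
# interface, so the number of chips (and hence capacity) scales with the bus.
# Clamshell/dual-rank layouts that double capacity are ignored here.
def vram_gb(bus_width_bits: int, chip_capacity_gb: int = 2) -> int:
    chips = bus_width_bits // 32  # one 32-bit channel per chip
    return chips * chip_capacity_gb

for bus in (64, 128, 192, 256):
    print(f"{bus}-bit bus with 2GB chips -> {vram_gb(bus)} GB VRAM")
# 64-bit -> 4 GB, 128-bit -> 8 GB, 192-bit -> 12 GB, 256-bit -> 16 GB,
# so a 16GB card generally implies a wide (and more power-hungry) bus.
```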
 

InvalidError

Titan
Moderator
Globally efficient production that saves the construction of a $100 billion fab complex is a lot of savings/profit to share around!
While it may soften the up-front cost of building fabs, you still want to aggregate all of the manufacturing necessities in the same general area to save on transport. With an integrated multiplex, raw wafer and other input deliveries come in, fully assembled silicon comes out, saving billions of dollars and days/weeks/months in intermediate road/rail/sea transit over the next couple of years.
 

Giroro

Splendid
Baking lightmaps and using 4K textures requires as much VRAM as you can get. Also, if VRAM were as inexpensive as DDR4 system RAM, all the better to use more of an inexpensive resource. If you plan to game at 4K you would need a GPU with plenty of VRAM.

Native 4k gaming won't be mainstream for at least 5 years, if ever, depending on the progress of upscaling tech.

Also, 4k textures are beyond overkill and a complete waste of disk space even for 4k gaming. There are usually dozens/hundreds of textures on screen at once. Even 2k textures are rarely noticeable at 4k.
 

InvalidError

Titan
Moderator
Also, 4k textures are beyond overkill and a complete waste of disk space even for 4k gaming. There are usually dozens/hundreds of textures on screen at once. Even 2k textures are rarely noticeable at 4k.
"4k textures" doesn't mean that the textures are 4k, it only means that the texture pack has been optimized for 4k output. The actual size of textures needs to be appropriate to whatever scaling range the texture is intended to be used at. If you have an open-world game, you may want billboard-sized pictures at 4k-and-beyond actual resolution when normal game play will let players have both bird-eye's view (the whole texture has to exist) and close-up views (high enough resolution not to be jarring) at different times.
 
"4k textures" doesn't mean that the textures are 4k, it only means that the texture pack has been optimized for 4k output. The actual size of textures needs to be appropriate to whatever scaling range the texture is intended to be used at. If you have an open-world game, you may want billboard-sized pictures at 4k-and-beyond actual resolution when normal game play will let players have both bird-eye's view (the whole texture has to exist) and close-up views (high enough resolution not to be jarring) at different times.
No, the person is correct. "4K texture" means the texture size is 4K, or 4096x4096. If you don't believe me, here are some examples of people saying "4K textures" to mean just that:
https://www.unrealengine.com/marketplace/en-US/product/200-4k-noise-texture-pack/reviews
https://www.artstation.com/marketplace/p/rvx5k/sci-fi-panels-free-4k-texture-pack
https://mcpedl.com/realismextreme-4k-bedrock/
https://www.dsogaming.com/mods/this-18gb-skyrim-4k-mod-overhauls-all-of-the-creation-club-textures/
https://www.reddit.com/r/pcmasterrace/comments/9zzc0z/psa_4k_textures_does_not_mean_made_for_a_4k/

https://www.unrealengine.com/marketplace/en-US/product/20-unique-sci-fi-materials-4k-texture
https://www.nexusmods.com/fallout4/mods/52423

And yes, I've seen people say "4K textures" to mean "optimized for 4K resolution", but that's not the historically typical use of the term.

However, I will say whatever resolution the textures need to be is completely arbitrary. There's no point in applying a 4K texture to a tiny asset. You're just wasting VRAM at that point.
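For a rough sense of what that waste looks like, here is a back-of-the-envelope estimate of per-texture VRAM cost (the bytes-per-texel values are typical figures, and the ~1/3 mip-chain overhead is an approximation):

```python
# Rough VRAM footprint of a single square texture. Bytes-per-texel values
# are typical: 4 for uncompressed RGBA8, 1 for BC7, 0.5 for BC1.
def texture_mib(size: int, bytes_per_texel: float, mips: bool = True) -> float:
    base = size * size * bytes_per_texel
    # A full mip chain adds roughly 1/3 on top of the base level.
    return base * (4 / 3 if mips else 1) / 2**20

for size in (1024, 2048, 4096):
    print(f"{size}x{size}: RGBA8 ~{texture_mib(size, 4):.0f} MiB, "
          f"BC7 ~{texture_mib(size, 1):.0f} MiB")
# A 4096x4096 texture is ~85 MiB uncompressed (~21 MiB as BC7) with mips,
# so a few hundred of them at full size would swamp a 4-8GB card.
```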

EDIT: If you want more evidence that nK means the resolution size, here's an 8K texture for Minecraft: https://www.planetminecraft.com/texture-pack/i-j-o-ice-texture-pom-pbr-low-end-pc-high-end-pc/

I'm sure someone is dying to play Minecraft in 8K.
 
Jan 3, 2022
67
13
35
One feature I haven't mentioned, but which has gained a lot of traction lately, is RTXGI and RTXDI (Direct Illumination).
RTXDI uses a GI cache for real-time rendering of thousands and thousands of lights without any lightmaps. This was shown running on a 2080 Ti (11GB) at 30 FPS, and that was a relatively small scene.
Timestamp: https://youtu.be/GpyDXAartJ8?t=7788

Now imagine this in an open-world game or some larger scene.

This is experimental tech, and it is not known whether it will become an industry standard.
Samsung's new GDDR7 memory has close to 50% more memory bandwidth at 32Gbps. So for developers, and maybe some experimental cutting-edge real-time rendering and games using it, VRAM will serve as a resource for GI data, not only for lightmaps and textures.
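For reference, peak memory bandwidth follows directly from bus width and per-pin data rate; a minimal sketch (the per-pin speeds below are illustrative, and the percentage gain depends on which GDDR6 speed you compare against):

```python
# Peak memory bandwidth is (bus width in bits) * (per-pin data rate in Gbps) / 8.
def mem_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

for label, rate in [("GDDR6 @ 18 Gbps", 18), ("GDDR6X @ 21 Gbps", 21),
                    ("GDDR7 @ 32 Gbps", 32)]:
    print(f"256-bit bus, {label}: {mem_bandwidth_gbs(256, rate):.0f} GB/s")
# 576, 672 and 1024 GB/s on the same 256-bit bus: 32 Gbps GDDR7 lands roughly
# 50% above 21 Gbps GDDR6X, which is where that figure comes from.
```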
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
Depends on your definition of fine. The RX6500 on 3.0x4 vs 4.0x4 shows that there is a pretty good number of recent games where 4GB just doesn't cut it unless you have somewhat decent access to system memory for asset streaming. It will only get much worse for 4GB cards from here.

4GB for new gaming cards is NFG beyond lightweight games.

Well, even if you give these cards 8-16GB, it's not going to work because the GPU is the bottleneck.

We are talking about really low-end, sub-100W cards here.
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
One feature I haven't mentioned, but which has gained a lot of traction lately, is RTXGI and RTXDI (Direct Illumination).
RTXDI uses a GI cache for real-time rendering of thousands and thousands of lights without any lightmaps. This was shown running on a 2080 Ti (11GB) at 30 FPS, and that was a relatively small scene.
Timestamp: https://youtu.be/GpyDXAartJ8?t=7788

Now imagine this in an open-world game or some larger scene.

This is experimental tech, and it is not known whether it will become an industry standard.
Samsung's new GDDR7 memory has close to 50% more memory bandwidth at 32Gbps. So for developers, and maybe some experimental cutting-edge real-time rendering and games using it, VRAM will serve as a resource for GI data, not only for lightmaps and textures.

All this memory will help ONLY when the GPU itself isn't the bottleneck.

There are 3 main bottlenecks for a graphics card: the GPU (core), memory bandwidth, and memory amount. If any one of them is the bottleneck, then improving the other two doesn't help.
 

InvalidError

Titan
Moderator
Well, even if you give these cards 8-16GB, it's not going to work because the GPU is the bottleneck.

We are talking about really low-end, sub-100W cards here.
The 8GB RX5500 is upwards of 70% faster than the 4GB RX5500 when you bump details slightly beyond what fits in 4GB while the GPU is still capable of delivering usable (30+fps) performance.

The RX6500 has more raw performance but it gets massively hampered the instant it gets pushed beyond 4GB, sometimes to the point of performing worse than the much slower GTX1650S.

Having an 8GB option on entry-level GPUs is perfectly fine and basically mandatory if you want them to have any meaningful longevity beyond graphically trivial modern games. Even more so if you account for the 1-4GB of VRAM that may be in use by background software like browsers, Skype, Discord, the OS, etc.
 

deesider

Distinguished
Jun 15, 2017
308
147
18,890
One feature I haven't mentioned, but which has gained a lot of traction lately, is RTXGI and RTXDI (Direct Illumination).
RTXDI uses a GI cache for real-time rendering of thousands and thousands of lights without any lightmaps. This was shown running on a 2080 Ti (11GB) at 30 FPS, and that was a relatively small scene.
Timestamp: https://youtu.be/GpyDXAartJ8?t=7788

Now imagine this in an open-world game or some larger scene.

This is experimental tech, and it is not known whether it will become an industry standard.
Samsung's new GDDR7 memory has close to 50% more memory bandwidth at 32Gbps. So for developers, and maybe some experimental cutting-edge real-time rendering and games using it, VRAM will serve as a resource for GI data, not only for lightmaps and textures.
This is great stuff, but possibly now overshadowed(?) by UE5 Lumen, which has the benefit of not being locked to a hardware vendor. Still, I think it's great that GI rather than baked lighting is set to become the norm.
 
Jan 3, 2022
67
13
35
This is great stuff, but possibly now overshadowed(?) by UE5 Lumen, which has the benefit of not being locked to a hardware vendor. Still, I think it's great that GI rather than baked lighting is set to become the norm.
Lumen is locked to UE. RTXGI and RTXDI can run on any GPU compute device with DX12 and can be integrated into any engine or rendering app.
 

deesider

Distinguished
Jun 15, 2017
308
147
18,890
Lumen is locked to UE. RTXGI and RTXDI can run on any GPU compute device with DX12 and can be integrated into any engine or rendering app.
I'm impressed - that's uncharacteristically generous of Nvidia. Seems a much more efficient way of using raytracing hardware than using the rays to directly render.