News Ryzen 7 5800X3D Beats Core i9-12900KS By 16% In Shadow of the Tomb Raider

"Hardware is outpacing Software"

looks at raytracing, accurate physics and global illumination

Hm... Yeah, not quite.

The big majority of people are still on hardware that struggles with moving the quality sliders to the right, so it's more of a chicken-and-egg situation.

Regards.
Ray tracing is pretty useless in a lot of games that even support it. Not sure what those other two things are, but I think many would agree hardware is outpacing software to an extent, and I think it's mainly due to increased core counts, like AMD making 12 and 16 core CPUs mainstream. Totally overkill for gaming. Same with the 12900K and kind of the 12700K. And for GPUs, 24GB of VRAM is Titan-level stuff. It kind of seems like companies are taking mainstream and work-type parts and mushing the lines together.
 
Ray tracing is pretty useless in a lot of games that even support it.
Ray tracing only appears useless because of the limited ways it's been applied so far (most ray-traced games don't use ray tracing for everything, just for some things) and because graphics artists have been pretty good at faking what it offers. But the problem is that faking it requires a lot of effort on the production end to make it look right. Ray tracing allows artists to place lights and not need to do anything else to get the lighting "correct".

The three biggest things I can think of that ray tracing combats that most games still have issues with are:
  • Screen space reflections, especially when there's no cube map fallback. It jars me to no end that I'm looking at a reflective surface and all I have to do is look away enough for the reflection to disappear. And if there's something between the camera and the reflective surface, that something gets included in the reflection. One of the biggest fails I've seen with this was in Far Cry 5, where I was flying in a helicopter over a lake. Not only did the lake reflect the helicopter's frame (or at least, the frame obscured the reflection), but all I had to do was look down until the horizon was above the top of the screen and the water turned black. I'm pretty sure that's not what happens in real life. (There's a rough sketch of this failure mode after the list.)
  • Shadow details that have a short cut-off distance. It's a real immersion breaker when I see nice detailed shadows, then not 30 or so feet away there's a hard line where they become pixelated blobs.
  • Lighting that doesn't make sense. I first noticed this when I was playing Skyrim and saw the sun's light reflecting off water. In the shadow of a building. And since then I've noticed plenty of this issue where things are lit up with no discernible explanation.
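To illustrate the first point, here's a minimal toy sketch (in Python, with an invented 1D "depth buffer") of why screen-space reflections break down: the reflection ray is only marched through what's already on screen, so anything that falls outside the frame simply can't show up in the reflection.

```python
import numpy as np

# Hypothetical 1D "depth buffer" standing in for one row of the screen.
# Anything off the ends of this array is unknown to a screen-space technique.
depth_buffer = np.array([5.0, 5.0, 4.8, 4.5, 4.2, 2.0, 2.0, 2.0])

def ssr_march(start_x, start_depth, dir_x, dir_depth, steps=32, step_size=0.5):
    """March a reflection ray through screen space.

    Returns the pixel index it hits, or None if the ray leaves the
    screen before intersecting anything -- the classic SSR failure case.
    """
    x, d = float(start_x), start_depth
    for _ in range(steps):
        x += dir_x * step_size
        d += dir_depth * step_size
        px = int(round(x))
        if px < 0 or px >= len(depth_buffer):
            return None          # ray left the frame: no reflection data exists
        if d >= depth_buffer[px]:
            return px            # ray passed behind a stored surface: "hit"
    return None

# A ray that stays on screen finds something to reflect...
print(ssr_march(start_x=3, start_depth=1.0, dir_x=1.0, dir_depth=0.6))
# ...but a ray that exits the frame returns nothing, which is exactly the
# "look down and the water turns black" artifact described above.
print(ssr_march(start_x=3, start_depth=1.0, dir_x=-1.0, dir_depth=0.1))
```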
 
Ray tracing is pretty useless in a lot of games that even support it. Not sure what those other two things are, but I think many would agree hardware is outpacing software to an extent, and I think it's mainly due to increased core counts, like AMD making 12 and 16 core CPUs mainstream. Totally overkill for gaming. Same with the 12900K and kind of the 12700K. And for GPUs, 24GB of VRAM is Titan-level stuff. It kind of seems like companies are taking mainstream and work-type parts and mushing the lines together.

The problem is, we aren't getting the significant gains in the areas that matter for gaming: CPUs would benefit FAR more from significant single-core performance improvements than from 16 cores, but ever since we hit that 5GHz brick wall years ago, the gains in single-core performance have usually not been enough to move silicon. So Intel and AMD have moved to increasing core counts to sell CPUs.
 
Clock speed wall?
You realize that efficiency has improved too, right?

The amount of computation per GHz in a single core is much greater today than back when it was all about Moar Hertzes.
It's not about the computation throughput though; making a core wider is about as beneficial as adding more cores for a single heavy game, as in not much.
A game doesn't need to run many instructions in parallel on the same core; it needs to run its single, low-IPC instruction stream as fast as possible.
 
Complete parent comment - those without kids will seriously not appreciate this. The value I see in AMD refresh chips is sustaining the investment in the kids' gaming PCs. If I can spend $200-$450 on a CPU upgrade and/or $500-$800 on a GPU upgrade (now that they are available and I live near a Microcenter) to keep their X570 / B450, Zen 2, DDR4 PCs relevant another 4 years, that is a better value proposition than spending $300-$400 on DDR5 RAM, $200-$300 on an Intel motherboard, $500-$800 on an RTX 4000 series GPU, and $200-$450 on an Alder Lake CPU. Remember, I am not maintaining just one gaming PC, but three. So the savings per PC become meaningful.

Just like hand-me-down clothes, no kid wants hand-me-down PC parts.

Personally, unless I was planning on buying an RTX 4000 series GPU (PCIe 5.0), I don't see the value in jumping into Alder Lake or Zen 4 right now. If you have an RTX 3000 series GPU and are going to rock that for 2-4 years, why bother upgrading if you are on a PCIe 4.0 motherboard?
 

Personally, unless I was planning on buying an RTX 4000 series GPU (PCIe 5.0), I don't see the value in jumping into Alder Lake or Zen 4 right now. If you have an RTX 3000 series GPU and are going to rock that for 2-4 years, why bother upgrading if you are on a PCIe 4.0 motherboard?

You don't even need PCIe 4.0 for the 3000 series. The PC spec NVIDIA used for the 3000 series launch and all of its benchmarks used 3.0. Maybe RTX 4000 will see a benefit from 4.0 over 3.0, but I highly doubt you will need a 5.0 setup to get the full potential of the GPU.
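For reference, here's the raw bandwidth math behind that (the per-lane rates are the published PCIe figures; the x16 totals are just multiplication):

```python
# Approximate usable bandwidth per lane in GB/s, after 128b/130b encoding
# overhead (PCIe 3.0 and newer).
per_lane_gb_s = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}

for gen, lane in per_lane_gb_s.items():
    print(f"{gen} x16: ~{lane * 16:.0f} GB/s per direction")
# ~16, ~32, and ~63 GB/s respectively. Current GPUs rarely saturate even the
# 3.0 figure outside of edge cases, which is why a 3000-series card loses
# very little in a 3.0 slot.
```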
 
You don't even need PCIe 4.0 for the 3000 series. The PC spec NVIDIA used for the 3000 series launch and all of its benchmarks used 3.0. Maybe RTX 4000 will see a benefit from 4.0 over 3.0, but I highly doubt you will need a 5.0 setup to get the full potential of the GPU.

@sizzling Thanks for the feedback and information.

If PCIe 5.0 is not needed to get the full potential of RTX 4000, the incentive to jump to Alder Lake / Zen 4 becomes just DDR5, which is expensive at the moment. Seriously, extend the life of your Zen 2 investment. If you have X570 / B550 with Zen 2, upgrade the CPU to Zen 3. If you have Zen 2 on an older board and want RTX 4000, replace the motherboard, recycle the RAM, and upgrade to Zen 3. You would still be saving quite a bit on RAM, which could go toward a better GPU or another kid's PC.

At the end of the day, it is about getting the maximum life out of the parts you have on hand. Of course, this is from a parent who taught their kids how to overclock 3200 RAM kits to the maximum memory clock their CPUs could handle, so I could save a few bucks per PC.
 
@sizzling Thanks for the feedback and information.

If PCIe 5.0 is not needed to get the full potential of RTX 4000, the incentive to jump to Alder Lake / Zen 4 becomes just DDR5, which is expensive at the moment. Seriously, extend the life of your Zen 2 investment. If you have X570 / B550 with Zen 2, upgrade the CPU to Zen 3. If you have Zen 2 on an older board and want RTX 4000, replace the motherboard, recycle the RAM, and upgrade to Zen 3. You would still be saving quite a bit on RAM, which could go toward a better GPU or another kid's PC.

At the end of the day, it is about getting the maximum life out of the parts you have on hand. Of course, this is from a parent who taught their kids how to overclock 3200 RAM kits to the maximum memory clock their CPUs could handle, so I could save a few bucks per PC.
This is effectively what I plan to do. Currently running a 3700X, 2x16GB 3200MHz (OC'd @ 3600MHz) and a 3080, paired with a 1440p 240Hz monitor. If the 5800X3D is a better gamer than the 5800X/5900X, it is going to allow me to get the most out of my current platform. By the time I need to upgrade, DDR5 will have matured and should be faster and cheaper than the current offerings.
 
Lol, I'm still rocking my i5-2500K @ 4.7GHz + 16GB 2133MHz RAM + 1070 Ti for 1440p gaming, rock solid 60-75 FPS in modern games. This is unbelievable, an 11-year-old CPU still going this strong.
I had a 2600K @ 4800MHz, 16GB, Win7. Bought a new RTX 2070 Super, put it in, and benched the crap out of it. Then I built a new Ryzen 5600X system with 16GB, put in the same RTX 2070 Super, and re-benched everything. Man, I was shocked just how much it was holding back the RTX 2070 Super!
I was, and you most definitely are, well overdue for an upgrade!
 
It's not about the computation throughput though; making a core wider is about as beneficial as adding more cores for a single heavy game, as in not much.
A game doesn't need to run many instructions in parallel on the same core; it needs to run its single, low-IPC instruction stream as fast as possible.
Right, and the poster I was responding to was complaining about the 5GHz wall.

Clock frequency is not the be-all, end-all, even for gaming. Just ask the Pentium 4.
 
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already over. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's for CPU benchmarks only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS with your shiny new-for-2022 LG 42" C2 series 4K OLED.
One thing to keep in mind is that game developers are still mostly targeting playable performance on the ancient AMD Jaguar processors used in the PS4 and Xbox One, which were a bit underpowered even when those consoles first launched close to a decade ago. So, PC CPUs might be overpowered relative to the hardware game developers are still designing their games around, but that probably won't be the case for long, as the install-base of new consoles is getting to the point where developers will start dropping the older platforms with increasing frequency. When they do that, many will start to make heavier use of the much faster Zen2 CPUs found in the new consoles, which are a lot closer in performance to today's mid-range CPUs.

If a game targets 60fps on those consoles, making heavy use of things like physics and crowds of NPCs to push those processors to their limits, one shouldn't expect too much more performance on a PC, something relevant to those with high-refresh rate screens. And if a game targets 30fps on the new consoles, which will likely happen eventually, then good luck trying to maintain 60fps on any of today's CPUs.

It might not be as relevant for the minority of people who replace their hardware all the time, but for the majority who keep a system for a number of years, differences in CPU performance will become more relevant as time goes on. And while upgrading graphics cards is relatively common, people tend to not upgrade CPUs as often.

Ray tracing is pretty useless in a lot of games that even support it. Not sure what those other two things are, but I think many would agree hardware is outpacing software to an extent, and I think it's mainly due to increased core counts, like AMD making 12 and 16 core CPUs mainstream. Totally overkill for gaming. Same with the 12900K and kind of the 12700K. And for GPUs, 24GB of VRAM is Titan-level stuff. It kind of seems like companies are taking mainstream and work-type parts and mushing the lines together.
The problem with increasing core counts is that many algorithms used by games (and most other applications) can't be split between multiple cores. So performance will usually end up limited by the most demanding software thread on a single core, while other cores will tend to sit underutilized. Some routines can utilize additional cores more effectively, but a game is typically going to be limited by per-core performance more than anything, provided it has sufficient cores to go around for the less-demanding threads.

Also, I would hardly call 12 and 16 core CPUs "mainstream" at this point, even if they are available on mainstream platforms. And while it's true that they might be "overkill" for gaming as far as their additional cores go, since games typically won't be designed to utilize them, gaming performance on those processors will still be limited by what a single core can do.
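A quick back-of-the-envelope illustration of that single-thread limit (the 60% serial fraction below is a made-up number for the sake of the example, not a measurement of any real game):

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: upper bound on speedup when only part of the
    work can be spread across cores."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Assume (purely for illustration) that 60% of a frame's work is stuck on
# one thread -- render submission, game logic, whatever.
serial = 0.60
for cores in (4, 8, 12, 16):
    print(f"{cores:2d} cores -> {amdahl_speedup(serial, cores):.2f}x speedup")
# Going from 8 to 16 cores here buys only a few percent, which is why
# per-core performance still dominates gaming benchmarks.
```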

The 5800X3D might be really great at gaming, but a single 720p test result that flies in the face of AMD's own numbers is highly suspect. If AMD thought it could get 20% more performance across a wide selection of games, it would be doing a lot more promoting of the performance.
To be fair, AMD suggested the 5800X3D could push roughly 10% more performance than a 12900K in that game at 1080p with high settings (1.1x in their graph), so those results don't necessarily seem out of line with their claims. It's just that AMD based their claims on a somewhat more realistic resolution and settings where high-end graphics hardware may still impact performance to some degree, while this benchmark is based on unrealistic settings designed to show the maximum possible difference. I certainly wouldn't expect 20% to be common, and AMD hasn't suggested that either, but even they claimed at least one game may show that kind of difference at 1080p high with a high-end graphics card, and there will likely be other examples as well, so that may be more of an upper limit. Most games will of course be GPU-limited more than anything though, at least for existing titles running on today's graphics hardware.

At the end of the day, the reality is that Zen 3 is on its way out. So this being a stopgap solution to somewhat dull Intel Alder Lake's advantage is only going to meet with very limited success. Furthermore, this chip is not exactly cheap. If it is not cheap and Zen 4 is just a couple of quarters away, then I see no reason to recommend buying it. If one is looking to upgrade from, say, Zen 1 or 1+, then there are cheaper Zen 3 alternatives which may not be as fast as the X3D in latency-sensitive apps/games, but will still provide good performance.
We could likewise say that Intel's current CPUs are "on the way out", as they too will require new motherboards for next-year's processor lineup. Zen3 is just now making its way to the mid-range, and it wouldn't be surprising if AMD were to keep their Zen4 lineup initially restricted to higher-end models, much as they did for AM4, since DDR5 is still cost-prohibitive for mid-range systems. Sure, someone targeting the high-end might be more inclined to wait for AM5 than to get a 5800X3D, but the same could be said for any given processor, as there's always something new around the corner.
 
Clock speed wall?
You realize that efficiency has improved too, right?

The amount of computation per GHz in a single core is much greater today than back when it was all about Moar Hertzes.

The problem is twofold. First, you cannot sell "core efficiency" or "IPC" to the general public anywhere near as easily as MHz/GHz. But you can with "number of cores".

And secondly, jumps from, say, 4GHz to 5GHz provided a far bigger gain in performance than generation-to-generation efficiency improvements. We've been stuck at about 5GHz ever since the underperforming mess that was the FX-9590 was on the market.
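Rough numbers to put that in perspective (the per-generation IPC gains below are assumed round figures, not measurements):

```python
# Illustrative only: treat single-thread performance as IPC x clock.
# One big frequency jump, 4 GHz -> 5 GHz, with no IPC change:
print(f"4 -> 5 GHz: {5.0 / 4.0:.2f}x")   # 1.25x in a single step

# Versus stacking generational IPC-only gains at a flat ~5 GHz wall:
for per_gen_gain in (0.10, 0.15):
    gens, perf = 0, 1.0
    while perf < 1.25:
        perf *= 1.0 + per_gen_gain
        gens += 1
    print(f"{per_gen_gain:.0%} IPC per gen: {gens} generations to match a +25% clock jump")
```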
 
Interesting that they picked a game that shows some of the biggest differences in performance just within the 12th gen Intel family itself.
I think SotTR is probably one of the 'best case' scenarios for the extra L3 to really shine.

Still a valid result. Hopefully AMD fully utilizes its new stacked cache wizardry more frequently in the Ryzen 7000 chips.
 
"Hardware is outpacing Software"

looks at raytracing, accurate physics and global illumination

Hm... Yeah, not quite.

The big majority of people are still on hardware that struggles with moving the quality sliders to the right, so it's more of a chicken-and-egg situation.

Regards.
Exactly.
The Unreal Engine 5's software-based ray tracing (Lumen) is gonna be big. AMD cards show huge ray tracing gains (over NVIDIA RTX hardware) with software-based Lumen.
The engine does have an RTX hardware-based 'fork', but that, to me, looks like NVIDIA just asked the Unreal Engine devs how many zeros they want on the check to keep developing for RTX hardware. 😉

(possibly a bit cynical but, ¯\_ (ツ) _/¯ )
 
Exactly.
The Unreal Engine 5's software-based ray tracing (Lumen) is gonna be big. AMD cards show huge ray tracing gains (over NVIDIA RTX hardware) with software-based Lumen.
The engine does have an RTX hardware-based 'fork', but that, to me, looks like NVIDIA just asked the Unreal Engine devs how many zeros they want on the check to keep developing for RTX hardware. 😉

(possibly a bit cynical but, ¯\_ (ツ) _/¯ )
I doubt it's purely RTX. Seems more like Lumen has support for hardware (DirectX Raytracing) and software. Software will work on older GPUs, but won't have the accuracy of hardware reflections and is apparently limited in some ways. From the UE5 pages:
----------------
Lumen Ray Tracing
Lumen provides two methods of ray tracing the scene: Software Ray Tracing and Hardware Ray Tracing.
  • Software Ray Tracing uses Mesh Distance Fields to operate on the widest range of hardware and platforms but is limited in the types of geometry, materials, and workflows it can effectively use.
  • Hardware Ray Tracing supports a larger range of geometry types for high quality by tracing against triangles and to evaluate lighting at the ray hit instead of the lower quality Surface Cache. It requires supported video cards and systems to operate.
Software Ray Tracing is the only performant option in scenes with many overlapping instances, while Hardware Ray Tracing is the only way to achieve high quality mirror reflections on surfaces.
Software Ray Tracing
Lumen uses Software Ray Tracing against Signed Distance Fields by default. This tracing representation is supported on any hardware supporting Shader Model 5 (SM5), and only requires that Generate Mesh Distance Fields be enabled in the Project Settings.
The renderer merges Mesh Distance Fields into a Global Distance Field to accelerate tracing. By default, Lumen traces against each mesh's distance field for the first two meters for accuracy, and the merged Global Distance Field for the rest of each ray.
Projects with extreme overlapping instances can control the method Lumen uses with the project setting Software Ray Tracing Mode. Lumen provides two options to choose from:
  • Detail Tracing is the default method and involves tracing against the individual mesh's signed distance field for the highest quality. The first two meters are used for accuracy and the Global Distance Field for the rest of each ray.
  • Global Tracing only traces against the Global Distance Field for each ray for the fastest traces.
Mesh Distance Fields are streamed in and out based on distance as the camera moves through the world. They are packed into a single atlas to allow ray tracing.
----------------
Of course, there's the question of how much better the hardware vs. software methods look in actual practice. There are many instances where RT reflections, shadows, and lighting compared to non-RT variants only look a bit better and are not worth the performance hit. But then, I can say the same about ultra quality textures vs. high quality textures, and a bunch of other graphics effects as well. I'm frequently amazed at how good modern games look even at "medium" quality settings.
 
The "software ray tracing" method Lumen uses, based on the terminology being thrown around, is ray marching. I found a demo of ray marched reflections on ShaderToy and even on a Intel UHD 600 series iGPU, it gets about 20 FPS at 1080p. This technique has seen some use already in games, like CryTek's SVOGI global illumination.

There are many instances where RT reflections, shadows, and lighting compared to non-RT variants only look a bit better and are not worth the performance hit.
I think the difference though isn't so much about the image quality, but the amount of effort the artist needs to get that image quality using one method or another. From what I can gather about developers working with RT, they've pointed out that they don't need to do a lot of extra work, if any, to get the scene to look correct.

When I looked into Physically Based Rendering, there was a similar sentiment. The artist didn't have to do a lot of tweaking to make things look correct. It just looks correct by virtue of using accurate algorithms rather than guesswork.
 
I doubt it's purely RTX. Seems more like Lumen has support for hardware (DirectX Raytracing) and software. Software will work on older GPUs, but won't have the accuracy of hardware reflections and is apparently limited in some ways. From the UE5 pages:
----------------
Lumen Ray Tracing
Lumen provides two methods of ray tracing the scene: Software Ray Tracing and Hardware Ray Tracing.
  • Software Ray Tracing uses Mesh Distance Fields to operate on the widest range of hardware and platforms but is limited in the types of geometry, materials, and workflows it can effectively use.
  • Hardware Ray Tracing supports a larger range of geometry types for high quality by tracing against triangles and to evaluate lighting at the ray hit instead of the lower quality Surface Cache. It requires supported video cards and systems to operate.
Software Ray Tracing is the only performant option in scenes with many overlapping instances, while Hardware Ray Tracing is the only way to achieve high quality mirror reflections on surfaces.
Software Ray Tracing
Lumen uses Software Ray Tracing against Signed Distance Fields by default. This tracing representation is supported on any hardware supporting Shader Model 5 (SM5), and only requires that Generate Mesh Distance Fields be enabled in the Project Settings.
The renderer merges Mesh Distance Fields into a Global Distance Field to accelerate tracing. By default, Lumen traces against each mesh's distance field for the first two meters for accuracy, and the merged Global Distance Field for the rest of each ray.
Projects with extreme overlapping instances can control the method Lumen uses with the project setting Software Ray Tracing Mode. Lumen provides two options to choose from:
  • Detail Tracing is the default method and involves tracing against the individual mesh's signed distance field for the highest quality. The first two meters are used for accuracy and the Global Distance Field for the rest of each ray.
  • Global Tracing only traces against the Global Distance Field for each ray for the fastest traces.
Mesh Distance Fields are streamed in and out based on distance as the camera moves through the world. They are packed into a single atlas to allow ray tracing.
----------------
Of course, there's the question of how much better the hardware vs. software methods look in actual practice. There are many instances where RT reflections, shadows, and lighting compared to non-RT variants only look a bit better and are not worth the performance hit. But then, I can say the same about ultra quality textures vs. high quality textures, and a bunch of other graphics effects as well. I'm frequently amazed at how good modern games look even at "medium" quality settings.

I was referring more to the specific RTX 'fork' (I think?).
I thought I read somewhere that, in addition to hardware-based ray tracing support, there is a specific RTX fork. I don't think it brings anything more to the table except worse ray tracing performance on everything except RTX hardware.

Maybe they're one and the same though.
 
I was referring more to the specific RTX 'fork' (I think?).
I thought I read somewhere that, in addition to hardware-based ray tracing support, there is a specific RTX fork. I don't think it brings anything more to the table except worse ray tracing performance on everything except RTX hardware.

Maybe they're one and the same though.
Unless that pipeline forces DLSS, I'm struggling to see what makes NVIDIA's implementation stand out enough that the application should care.

The only other thing that makes NVIDIA's RT cores stand out is that NVIDIA includes BVH structure acceleration, whereas AMD did not include that. And details seem sparse on what Intel is doing with Xe, but this slide suggests they're not doing BVH structure acceleration either.
[Image: Intel Ponte Vecchio architecture slide]

However, these are things the application shouldn't care about.
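For context on what "BVH structure acceleration" refers to, here's a tiny, hypothetical software sketch of the idea: a bounding-volume hierarchy lets a ray skip whole groups of primitives whose bounds it never enters, instead of testing every one, and dedicated hardware just performs that traversal in fixed-function logic. The structure below is a simplified 1D stand-in, invented purely for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Node:
    lo: float                              # bounds of everything below this node
    hi: float
    children: Tuple["Node", ...] = ()
    primitives: Tuple[float, ...] = ()     # leaf: "triangle" positions to test

def traverse(node: Node, ray_point: float, tests: List[int]) -> Optional[float]:
    """Return the first primitive near ray_point, counting bounds/primitive tests."""
    tests[0] += 1
    if not (node.lo <= ray_point <= node.hi):
        return None                        # whole subtree skipped with one test
    for prim in node.primitives:
        tests[0] += 1
        if abs(prim - ray_point) < 0.5:
            return prim
    for child in node.children:
        hit = traverse(child, ray_point, tests)
        if hit is not None:
            return hit
    return None

# Two leaves, each holding a handful of primitives.
left = Node(0.0, 10.0, primitives=(1.0, 3.0, 5.0, 7.0, 9.0))
right = Node(10.0, 20.0, primitives=(11.0, 13.0, 15.0, 17.0, 19.0))
root = Node(0.0, 20.0, children=(left, right))

tests = [0]
print(traverse(root, 15.2, tests), "found after", tests[0], "tests")
# A flat list would test the primitives one by one; here the entire left leaf
# was rejected with a single bounds check. RT hardware with traversal units
# walks structures like this in fixed-function logic instead of shader code.
```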
 
Unless that pipeline forces DLSS, I'm struggling to see what makes NVIDIA's implementation stand out enough that the application should care.
The only other thing that makes NVIDIA's RT cores stand out is that NVIDIA includes BVH structure acceleration, whereas AMD did not include that. And details seem sparse on what Intel is doing with Xe, but this slide suggests they're not doing BVH structure acceleration either.
However, these are things the application shouldn't care about.
Yeah, I don't know either.
I'll see if I can find the technical discussion I read. I don't think it was an easy Google though.
 
I had a 2600K @ 4800MHz, 16GB, Win7. Bought a new RTX 2070 Super, put it in, and benched the crap out of it. Then I built a new Ryzen 5600X system with 16GB, put in the same RTX 2070 Super, and re-benched everything. Man, I was shocked just how much it was holding back the RTX 2070 Super!
I was, and you most definitely are, well overdue for an upgrade!
No. I bought the 16GB of RAM just 2 years ago, and I will use it for as long as the mainboard works. Maybe I will wait for AM5, but this year I will not upgrade. The 2500K can deliver 50-75 FPS in modern games, which is my FreeSync range, so I don't even notice FPS drops. I don't need more FPS. 2500K rulez, but thanks for the suggestion. OK, there are sometimes small hiccups in some games, but nothing disastrous. I finished God of War and Days Gone lately at 1440p.
 
No. I bought the 16GB of RAM just 2 years ago, and I will use it for as long as the mainboard works. Maybe I will wait for AM5, but this year I will not upgrade. The 2500K can deliver 50-75 FPS in modern games, which is my FreeSync range, so I don't even notice FPS drops. I don't need more FPS. 2500K rulez, but thanks for the suggestion. OK, there are sometimes small hiccups in some games, but nothing disastrous. I finished God of War and Days Gone lately at 1440p.
Basically, a new current-gen CPU will deliver much faster frames, especially at 1920x1080!