News Ryzen 7 5800X3D Beats Core i9-12900KS By 16% In Shadow of the Tomb Raider

Even if it just matches the 12900KS, that's quite a feat for a CPU that will almost certainly consume significantly less energy.
And almost half the price!
That's a big plus.
Y'all need to consider doing away with giving people the benefit of the doubt, because I assure you, there are some out there doing those very things you're in denial about...
I agree with the last sentence though - hardware is outpacing software.
Exactly! I could not give a flying **** about 4K, the never-ending dangling carrot, so for 1080p this CPU, if it proves to be this good in HUB's and GN's reviews, is great for me.

The trusted reviews will show the truth; I just have to make up my mind after that whether I want this CPU and skip next gen (Zen 4), or skip this one and go for next gen, but that next-gen upgrade will cost A LOT more....
 
  • Like
Reactions: Makaveli and King_V
If this turns out to be true, then it may be bigger news than you think. Think about it: all that R&D Intel invested into Alder Lake, only to be beaten by a soon-to-be generation-old CPU plus 3D V-Cache... even when paired with a slower GPU in a gaming benchmark. Not good.
 
  • Like
Reactions: VforV and King_V
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already dead. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's for CPU benchmarks only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS on your shiny new-for-2022 LG 42" C2 series 4K OLED.
With 1440p 240Hz now a thing, and GPUs that are capable of very high FPS in many AAA games, the limiting factor can be the CPU. Having gone to a PG279QM a few months ago, I notice a decent improvement over 144Hz, but also that my CPU is now the limiting factor and not my 3080. It is clearly time to upgrade my 3700X, which was in no way limiting at 144Hz.

This could be the ideal upgrade path for me using my existing motherboard. I plan to buy as soon as it's released (if I can get it in my basket) and then keep it in the box until I have seen the reviews; my expectation is this will sell out on day one and not be seen again for at least a few months. If the reviews are bad it can go back, and I will get a 5900X to maximise the life of my current motherboard and RAM.
 
If this turns out to be true, then it may be bigger news than you think. Think about it: all that R&D Intel invested into Alder Lake, only to be beaten by a soon-to-be generation-old CPU plus 3D V-Cache... even when paired with a slower GPU in a gaming benchmark. Not good.
If this turns out to be good, then adding a huge cache will be zero R&D for Intel; they already have Foveros working, so they can just slap a bunch of cache on any CPU they want to. They also have more CPU models and could do that on all of them if they wanted to.

They already did it (adding lots of cache) with Broadwell, and it wasn't as big of a hit as people thought it would be.
 
If this turns out to be good, then adding a huge cache will be zero R&D for Intel; they already have Foveros working, so they can just slap a bunch of cache on any CPU they want to. They also have more CPU models and could do that on all of them if they wanted to.

They already did it (adding lots of cache) with Broadwell, and it wasn't as big of a hit as people thought it would be.

If it weren't for the recency of Alder Lake and the impending release of Zen 4, then you'd be correct; it wouldn't be a big deal. It's the combination that should worry Intel. If a 5800X3D can best an Intel flagship, do you even want to imagine what a 7800X3D would do?
 
  • Like
Reactions: VforV
If it weren't for the recency of Alder Lake and the impending release of Zen 4, then you'd be correct; it wouldn't be a big deal. It's the combination that should worry Intel. If a 5800X3D can best an Intel flagship, do you even want to imagine what a 7800X3D would do?
You think physics will break for AMD and they will be able to increase IPC indefinitely?
What if next gen is only a perf-per-watt improvement and/or more cores, but no real improvement in per-core performance?

Heck, as everybody already said, what if the 5800X3D only provides these improvements at 720p low, or in only a limited number of games?!
 
  • Like
Reactions: KyaraM
Lol, I'm still rocking my i5-2500K @ 4.7GHz + 16GB 2133MHz RAM + 1070 Ti for 1440p gaming, rock-solid 60-75 FPS in modern games. This is unbelievable, an 11-year-old CPU still going this strong.
 
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already dead. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's for CPU benchmarks only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS on your shiny new-for-2022 LG 42" C2 series 4K OLED.
True.. Not much difference.

I don't game at 4K yet, but in time I can see myself going for an M28U or M32U monitor.

Will be fun to see how it pulls compared to an overclocked 5800X or 5900X.
 
I'm going to admit, I didn't expect it to actually hold up to the claim, but, having outdone the 12900K and 12900KS, I also didn't expect the gap to be as big as it was.

Still, the non-apples-to-apples RAM comparison is a concern. Then again, Intel having the faster GPU as well? That's a weird sort of condition to lose under.

It does seem that the hostility here, if that's what I'm seeing, at a result showing that AMD's claim might have some merit, is really kind of disappointing.

Edit: seriously, even if it wasn't the fastest, even if it was "merely" nipping at the heels of the 12900K/12900KS, the bang-per-buck is not something to be dismissed.
 
I feel the title "World's fastest gaming processor" comes with a big caveat: that it is not consistent. In other words, it will be highly beneficial if the game can utilise the cache or is highly sensitive to latencies. The fact that AMD only showcased 6 titles in their slide likely proves this point. I do wonder what will happen when the resolution is increased to 1440p, though.

At the end of the day, the reality is that Zen 3 is on its way out. So this being a stopgap solution to try and somewhat dull Intel Alder Lake's advantage is only going to meet with very limited success. Furthermore, this chip is not exactly cheap. If it is not cheap and Zen 4 is just a couple of quarters away, then I see no reason to recommend buying it. If one is looking to upgrade from, say, Zen 1 or Zen+, then there are cheaper Zen 3 alternatives which may not be as fast as the X3D in latency-sensitive apps/games, but will still provide good performance.
The point here clearly is that the old Zen 3 is competing with Intel's brand-new hybrid design, which Intel should be ashamed of, considering the David-vs-Goliath gap in R&D budgets...
 
Dear Tom's,
Please include the i7-5775C in your benchmarks!!

Why?

The eDRAM only aided the integrated graphics performance.

It was slower than, yet more expensive than, the i7-4790K that preceded it, and it was superseded by the i7-6700K at a cheaper price only two months after its release. They barely sold, and most retailers stopped stocking them within a few months of release; they are probably the worst-selling retail Intel CPU of the last decade.
 
One thing to consider with this "testing" is that dropping the resolution to 720p and running at lower quality settings can very much skew results. Larger caches become even more beneficial when there's less "new" data to deal with. So, lower resolutions and quality settings can greatly reduce the amount of memory traffic that a CPU sees, and greatly improve cache hit rates. As others have said, we need to see a lot more games, we need to see them all running "comparable" configurations (so if you use DDR4-3600 on AMD, use the same DDR4-3600 on an Intel setup and forget about DDR5, and definitely use the same graphics card).

On that last note, it's entirely possible for a "slower" GPU to perform better than a "faster" GPU when you're running at extremely low settings. ROPs, clocks, pipelines, etc. all come into play. It can be easier, and thus faster, to not have to split up a simple workload like 720p across more shader cores and pipelines. I've encountered plenty of games where even at 1080p medium, "slower" GPUs like the 3080 Ti or 3080 will beat "faster" GPUs like the 3090. And of course, AMD's Infinity Cache on the RX 6000 series and the RX 6900 XT / 6800 XT in particular can make those GPUs look amazingly potent at lower quality settings.

So yeah, wait for more detailed reviews from reputable places rather than a one-off early post looking for traffic and using questionable testing practices. The 5800X3D might be really great at gaming, but a single 720p test result that flies in the face of AMD's own numbers is highly suspect. If AMD thought it could get 20% more performance across a wide selection of games, it would be doing a lot more promoting of the performance.
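If it helps to picture the working-set effect described above, here's a rough toy sketch in Python (not a model of any real CPU: the cache size, the uniform-random access pattern, and the working-set sizes are all made-up assumptions purely for illustration). Shrink the data touched per frame relative to a fixed cache and the hit rate climbs quickly, which is roughly what dropping to 720p/low settings can do:

```python
# Toy LRU cache model: illustration only, not a real memory-hierarchy simulation.
from collections import OrderedDict
import random

def hit_rate(cache_lines: int, working_set_lines: int, accesses: int = 200_000) -> float:
    """Hammer an LRU cache with uniform-random accesses over a working set
    and return the fraction of accesses that hit."""
    cache = OrderedDict()  # keys are line addresses, ordered by recency
    hits = 0
    for _ in range(accesses):
        line = random.randrange(working_set_lines)
        if line in cache:
            hits += 1
            cache.move_to_end(line)        # mark as most recently used
        else:
            if len(cache) >= cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
            cache[line] = None
    return hits / accesses

random.seed(0)
CACHE = 1536  # arbitrary stand-in for a big L3, scaled down so this runs fast
for working_set in (1024, 2048, 4096, 8192):  # bigger = "higher res / higher settings"
    print(f"working set {working_set:>4} lines -> hit rate {hit_rate(CACHE, working_set):.2f}")
```

Real games obviously don't access memory uniformly at random, but the direction of the effect is the same: a larger cache pays off most when the working set only just spills out of it, and lower settings push the working set back toward fitting.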
 
Lol, I'm still rocking my i5-2500K @ 4.7GHz + 16GB 2133MHz RAM + 1070 Ti for 1440p gaming, rock-solid 60-75 FPS in modern games. This is unbelievable, an 11-year-old CPU still going this strong.
(not a pertinent discussion to be had in this thread, but...) "Going" and "going strong" are different things. I'd put good money on your 2500K holding back your 1070 Ti a fair bit.
 
Last edited:
Why?

The eDRAM only aided the integrated graphics performance.

It was slower than, yet more expensive than, the i7-4790K that preceded it, and it was superseded by the i7-6700K at a cheaper price only two months after its release. They barely sold, and most retailers stopped stocking them within a few months of release; they are probably the worst-selling retail Intel CPU of the last decade.
Ehhh, it didn't just help the IGP. The 5775C matched, or more frequently beat, the 6700K in gaming benchmarks because of the eDRAM cache.
Yes, availability was very poor, and the eDRAM cache was seemingly too expensive to carry forward; rising DDR4 transfer rates ate away at its efficacy, as did the core-count increases with 8th gen.
I still think it's a good talking point to bring up in a review article like this. Not many people know about or remember the i7-5775C, but it essentially achieved the same design goals as AMD's 3D V-Cache on the 5800X3D.

 
  • Like
Reactions: JarredWaltonGPU
The real question is: will any of this actually matter in real-life gameplay? No one plays at 720p on low settings..... Let's see how they compare in a real-life setting like 1440p with graphics cranked to high, which is how most people play their games, unless it's a competitive shooter.....

Then you have to take into account that in productivity applications and single-core applications it's going to fall behind the 5800X, and possibly even the 5700X, because of its lowered base clock, boost clocks, and voltages. Most people playing graphics-intense single-player games would probably not even notice a difference in gaming, but they will notice a difference in productivity. And a noticeably lighter wallet, for little to no actual gains except bragging rights.....
 
Last edited:
Lol, I'm still rocking my i5-2500K @ 4.7GHz + 16GB 2133MHz RAM + 1070 Ti for 1440p gaming, rock-solid 60-75 FPS in modern games. This is unbelievable, an 11-year-old CPU still going this strong.

I have an i7-3770K system and an i9-12900K system. I've used the exact same 1070 Ti video card in both of them. There is a HUGE difference in performance across the board between the two. It's not even close.

So yeah, to you, your 2500K is "going strong." To everyone else, you're driving a '77 Ford Pinto wagon.
 
The real question is: will any of this actually matter in real-life gameplay? No one plays at 720p on low settings..... Let's see how they compare in a real-life setting like 1440p with graphics cranked to high, which is how most people play their games, unless it's a competitive shooter.....
Yes, it's always good to "save" readers from themselves by introducing/discussing "real life" metrics that accurately portray the "typical" use case. By focusing on the 720p resolution, you're missing the point of the test, though. Also, you're only going to see differences when cache thrashing is an issue. There will be many metrics where 3D V-Cache is rendered useless; AMD even posted slides of a handful of games showing the varying improvements to be had from 3D V-Cache.

Also remember that this info is coming from a site that's breaking NDA to generate site traffic. What metrics would you publish to generate clicks?
Most people playing graphics-intense single-player games would probably not even notice a difference in gaming, but they will notice a difference in productivity. And a noticeably lighter wallet, for little to no actual gains except bragging rights.....
I mean.... How many people own 3080 Ti/3090/3090 Ti/6900 XT GPUs that are far past the efficient price/performance curve? How many gamers are using CPUs with >12 threads? How many people buy 32+ GB of RAM? How many people have ATX mobos with only a single GPU installed? Etc., etc. In each of those categories, there's guaranteed to be a fair percentage of owners who are "wasting their money" on unused features. Bragging rights are an enormous factor in aftermarket PC components.

I can say I'd rather upgrade to a 5800X3D than a 5900X for my final AM4 CPU, although the 5700X is going to murder all of them in terms of price/performance if you need/use 16 threads. Not sure why AMD thought it was a good idea to launch that ahead of the 5800X3D.... but that's a discussion for another thread.
 
Last edited:
So going wider with the decoder, like Intel has with 5-wide decode instead of the 4-wide decode that Intel has used for two decades+, is a BIG change of pace.

But given x86's assembly architecture, I see a limit at 6-wide decode, due to the nature of x86 assembly having a max of 6 parts at best because of its VLIW (Variable Length Instruction Width); decoding all 6 parts in 1x cycle is the best you can possibly get, and making sure that consistently happens is critical to improving performance.
VLIW stands for Very Long Instruction Word, not Variable Length Instruction Word.

Decode width is not how many 'parts' of a single x86 instruction can be decoded simultaneously, it's how many x86 instructions can be decoded simultaneously. So there's no hard limit on decode width, so long as the rest of the pipeline can keep up. Also, Intel has already moved to 6-wide decode with Golden Cove (Alder Lake).
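To make that distinction concrete, here's a tiny back-of-the-envelope sketch (the instruction byte lengths are made up, and the greedy grouping is a gross simplification of any real front end): width is counted in whole instructions per cycle, regardless of how many bytes each one happens to be.

```python
# Toy decode-throughput sketch: not a model of any real Intel/AMD front end.
from math import ceil

def cycles_to_decode(instr_lengths_bytes: list[int], decode_width: int) -> int:
    """Each cycle, the decoder accepts up to `decode_width` complete x86
    instructions, whatever their individual byte lengths are."""
    return ceil(len(instr_lengths_bytes) / decode_width)

# A made-up stream of variable-length x86 instructions (the legal range is 1-15 bytes).
stream = [1, 3, 5, 2, 7, 4, 1, 6, 3, 2, 15, 4]

for width in (4, 5, 6):
    cycles = cycles_to_decode(stream, width)
    print(f"{width}-wide decode: {len(stream)} instructions in {cycles} cycles")
```

The variable length only makes it harder to *find* where each instruction starts, which is a front-end engineering problem rather than a hard cap on width.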
 
Last edited:
  • Like
Reactions: hotaru.hino
VLIW stands for Very Long Instruction Word, not Variable Length Instruction Word.

Decode width is not how many 'parts' of a single x86 instruction can be decoded simultaneously, it's how many x86 instructions can be decoded simultaneously. So there's no hard limit on decode width, so long as the rest of the pipeline can keep up. Also, Intel has already moved to 6-wide decode with Golden Cove (Alder Lake).
Then there's no reason to stop increasing decoder width as time moves on.
 
The real question is: will any of this actually matter in real-life gameplay? No one plays at 720p on low settings..... Let's see how they compare in a real-life setting like 1440p with graphics cranked to high, which is how most people play their games, unless it's a competitive shooter.....

Then you have to take into account that in productivity applications and single-core applications it's going to fall behind the 3800X, and possibly even the 3700X, because of its lowered base clock, boost clocks, and voltages. Most people playing graphics-intense single-player games would probably not even notice a difference in gaming, but they will notice a difference in productivity. And a noticeably lighter wallet, for little to no actual gains except bragging rights.....
Not a chance it will lose vs the Ryzen 3800X in applications, because it still has the higher IPC... it's Zen 3 vs Zen 2.

As for bragging rights, maybe, but how about the 12900KS for bragging rights?

Also, at that price point it's not cheap, but not very expensive either... again, the 12900KS looks worse and worse every day vs the 5800X3D, and even the 12900K stands to lose a lot if it's indeed slower while also being more expensive.

There is only one hope for Intel fanbois: that this benchmark is so flawed that the trusted reviews will show the complete opposite and the 5800X3D is actually much, much worse.

In any other scenario, where the 5800X3D scores in games close to what we see here, Intel loses. That will be the cold, hard truth.
 
I think consumer-level CPUs are way ahead of game developers.
I agree with the last sentence though - hardware is outpacing software.
The unfortunate thing is software can't really keep up with hardware in a consumer setting. The latest Steam Hardware Survey indicated that 6C CPUs are now the most common configuration, though only by a percentage point over 4C CPUs. I'd also caution that the Steam Hardware Survey is very likely biased towards gaming configurations, so it's highly likely that 4C CPUs are still the most common configuration overall. Either way, most game developers won't design a game that doesn't perform well, or at all, on the most common configurations.

I'm sure that in higher-performance settings like servers and data centers, software is deliberately designed to gobble up whatever CPU resources it can find.
 
"Hardware is outpacing Software"

looks at ray tracing, accurate physics, and global illumination

Hm... Yeah, not quite.

The big majority of people are still on hardware that struggles when you move the quality sliders to the right, so it's more of a chicken-and-egg situation.

Regards.
 
  • Like
Reactions: alceryes