News: Ryzen 7 5800X3D Beats Core i9-12900KS By 16% In Shadow of the Tomb Raider

To bring back AMD's official announcement slide:

[Image: AMD's "World's fastest gaming processor" announcement slide]
 
I feel that the title "World's fastest gaming processor" comes with a big caveat: it is not consistent. In other words, the chip will only be highly beneficial if the game can utilise the cache, i.e. if it is highly sensitive to memory latency. The fact that AMD showcased only six titles in its slide likely proves this point. I do wonder what will happen when the resolution is increased to 1440p, though.

At the end of the day, the reality is that Zen 3 is on its way out, so a stop-gap solution meant to dull Intel Alder Lake's advantage is only going to meet with very limited success. Furthermore, this chip is not exactly cheap. If it is not cheap and Zen 4 is just a couple of quarters away, then I see no reason to recommend buying it. If one is looking to upgrade from, say, Zen 1 or Zen+, there are cheaper Zen 3 alternatives which may not be as fast as the X3D in latency-sensitive apps/games but will still provide good performance.
 
I just realized how long the names of both CPUs are. Someone like my dad would probably have a hard time remembering that mumbo jumbo of alphanumeric model names.
 
  • Like
Reactions: gg83
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already dead. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's CPU bench only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS with your shiny new-for-2022 LG 42" C2-series 4K OLED.
 
I'm honestly shocked, but at the same time, who even cares? Like 10tacle said, no one is actually going to make use of this, because the people who can afford it are playing at 4K with their 3080 Ti. No one who is just a 1080p gamer is going to get this. I think consumer-level CPUs are far ahead of game developers.
 
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already dead. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's CPU bench only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS with your shiny new-for-2022 LG 42" C2-series 4K OLED.
Mobile handhelds like the Steam Deck & Aya Neo are using 720p, so saying that "nobody" is using it is disingenuous.

But that's an entirely new genre of Portable PC gaming and doesn't affect this CPU.
 
  • Like
Reactions: ottonis
Like 10tacle said, no one is actually going to make use of this, because the people who can afford it are playing at 4K with their 3080 Ti. No one who is just a 1080p gamer is going to get this.
I think consumer-level CPUs are far ahead of game developers.
Y'all need to consider doing away with giving people the benefit of the doubt, because I assure you, there are some out there doing those very things you're in denial about...
I agree with the last sentence though - hardware is outpacing software.
 
Meh. The days of "fastest gaming processor" bragging rights are numbered, if not already dead. Nobody games at 720p anymore, where the CPU shows and not the GPU; that's CPU bench only. And fewer and fewer of even the most competitive frame-chasing gamers are still gaming at 1080p as they move up to faster, higher-resolution 2K and 4K VA panels with ever more powerful GPUs on tap. AMD vs. Intel will make zero difference in your gaming FPS with your shiny new-for-2022 LG 42" C2-series 4K OLED.

There are still quite a few MMOs that heavily rely on single-thread CPU performance, even maxed out graphically at 4K.
 
  • Like
Reactions: VforV
My middle school daughter has been asking for the 5800X3D, and I will get it for her. She doesn't want it for gaming; she is more curious about how cache size affects performance. Having overclocked her first CPU (FX-8370) and PBO Curve Optimized her Ryzen 5600X, she understands there are diminishing returns from adding more power for more frequency. Heck, we got a 4.35 GHz all-core overclock on her little sister's Ryzen 3600 by undervolting the chip. She has seen research articles detailing the effect of cache size on performance and wants to experiment with it herself.

Encouraging my daughter's interest in computer engineering and science is worth the price tag.

Personally, I think it is time we start looking past frequencies and core count for performance gains, unless all you care about is your Cinebench R23 score. But, besides gaming, what tasks will be influenced by cache size? Any thoughts?
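If she wants to see the effect directly, one classic experiment is pointer chasing: time dependent random accesses over working sets of increasing size and watch the cost per access jump as each cache level is exhausted. Below is a minimal Python sketch of the idea (illustrative only; Python's object overhead inflates the real memory footprint and the absolute numbers, so a compiled language gives cleaner results, but the trend still shows up):

```python
import random
import time

def ns_per_access(working_set_bytes, accesses=2_000_000):
    """Chase a random cyclic permutation; each step is a dependent load."""
    n = max(working_set_bytes // 8, 2)      # rough element count
    order = list(range(n))
    random.shuffle(order)                   # random order defeats the prefetcher
    chain = [0] * n
    for i in range(n - 1):                  # link all slots into one big cycle
        chain[order[i]] = order[i + 1]
    chain[order[-1]] = order[0]
    idx, start = 0, time.perf_counter()
    for _ in range(accesses):
        idx = chain[idx]                    # next address depends on the previous load
    return (time.perf_counter() - start) / accesses * 1e9

# Nominal working sets chosen to land inside L2, inside a big L3, and out in DRAM.
for kib in (256, 4 * 1024, 64 * 1024):
    print(f"{kib:>6} KiB: {ns_per_access(kib * 1024):6.1f} ns per access")
```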
 
Personally, I think it is time we start looking past frequencies and core count for performance gains, unless all you care about is your Cinebench R23 score. But, besides gaming, what tasks will be influenced by cache size? Any thoughts?

There should be lots of professional applications with large data sets that see huge benefits from having a massive L3$.
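As a rough model of why that helps, the textbook average-memory-access-time (AMAT) formula makes the trade-off concrete. The numbers below are made-up assumptions for illustration, not measurements of the 5800X3D:

```python
# AMAT = hit_time + miss_rate * miss_penalty (all numbers are illustrative assumptions)
def amat(l3_hit_ns, l3_miss_rate, dram_ns):
    """Average latency for accesses that reach L3."""
    return l3_hit_ns + l3_miss_rate * dram_ns

dram_ns = 80.0                                                       # assumed DRAM round trip
small_l3 = amat(l3_hit_ns=10.0, l3_miss_rate=0.30, dram_ns=dram_ns)  # working set spills to DRAM
big_l3   = amat(l3_hit_ns=12.0, l3_miss_rate=0.10, dram_ns=dram_ns)  # tripled L3 catches more of it

print(f"smaller L3: {small_l3:.0f} ns average, bigger L3: {big_l3:.0f} ns average")
# Even with a slightly slower hit, cutting the miss rate is a big net win
# for workloads whose data sets hammer the last-level cache.
```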

But at the end of the day, the new frontier in CPU architecture design is led by Apple's M1.

Having 192 KiB of L1 (I$ & D$) is going to change things for the better for everybody else in the industry.

I was wondering which CPU company would hit 192 KiB of L1 first; I never really expected Apple to be the one.

Chips & Cheese did a simulation run with larger caches and found that the results all point to larger L1/L2/L3 being the future of CPU architecture performance improvements, on top of IPC-level improvements in each core.

We can always put in more cores, but frequency is running into a brick wall somewhere between 4 and 6 GHz, depending on how wide the architecture is, and thermals/power consumption get crazy as you scale up core frequency and concentrate all that heat in a tiny area on modern silicon process tech.

So going wider with the decoder, as Intel has done with Golden Cove's 6-wide decode instead of the 4-wide decode Intel had used for two decades, is a big change of pace.

But given the nature of x86, I see a limit at around 6-wide decode: an x86 instruction has at most six kinds of parts (prefixes, opcode, ModR/M, SIB, displacement, immediate) in its variable-length encoding, and decoding all of an instruction's parts in one cycle is the best you can possibly get, so making sure that happens consistently is critical to improving performance.
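To illustrate why wide x86 decode is hard, here is a minimal sketch with a few hand-picked x86-64 encodings (not a real decoder): instruction lengths vary from 1 to 15 bytes, so a decoder cannot know where instruction N+1 begins until it has at least length-decoded instruction N.

```python
# A few real x86-64 encodings, showing how much instruction length varies.
instructions = [
    (bytes.fromhex("90"),                   "nop"),                          # 1 byte
    (bytes.fromhex("b878563412"),           "mov eax, 0x12345678"),          # 5 bytes
    (bytes.fromhex("488b4c2408"),           "mov rcx, [rsp+8]"),             # 5 bytes
    (bytes.fromhex("0f1f840000000000"),     "nopl 0x0(rax,rax,1)"),          # 8 bytes
    (bytes.fromhex("48b8efcdab9078563412"), "mov rax, 0x1234567890abcdef"),  # 10 bytes
]

offset = 0
for encoding, mnemonic in instructions:
    print(f"offset {offset:2d}: {encoding.hex():<22} {mnemonic:<30} ({len(encoding)} bytes)")
    offset += len(encoding)   # the next boundary is only known after this length is found
```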
 
CapFrameX's own 12900K + DDR5-6400 benchmark shows it doing 220+ FPS, but they used a much slower DDR5 kit for this review, making the 12900K and 12900KS seem slower.

Wait for final reviews.
There would need to be at least two comparisons....

Running it "out of the box" with officially supported memory speeds (isn't DDR5-4800 all that is "officially" supported?), and then again with both systems using higher-clocked/overclocked RAM kits, will be interesting... (Having recently seen some of the huge gains shown by a 12900K with DDR5-6400, as shown below, it will prove interesting!)

https://www.youtube.com/watch?v=31R8ONg2XEU
 
Peruvian publication XanxoGaming shares the first gaming benchmark for AMD's Ryzen 7 5800X3D processor.

Ryzen 7 5800X3D Beats Core i9-12900KS By 16% In Shadow of the Tomb Raider: Read more
XanxoGaming executed the benchmarks in conjunction with the developer of CapFrameX, a helpful tool to analyze frame times. The publication tested at a 720p (1280 x 720) resolution with low custom settings to discard the graphics card as the bottleneck and remove it from the equation. It’s essential to consider that the Intel and AMD systems used different hardware, which XanxoGaming has admitted is not an apples-to-apples comparison.


The Core i9-12900K and Core i9-12900KS systems had a GeForce RTX 3090 Ti and DDR5-4800 C40 memory, whereas the Ryzen 7 5800X3D system was on a GeForce RTX 3080 Ti with DDR4-3200 C14 memory. So despite the Intel testbed having the better graphics card, the AMD system won by a fair margin, making the feat even more impressive.
"720p (1280 x 720) resolution with low custom settings to discard the graphics card as the bottleneck and remove it from the equation."
Doesn't this decrease the amount of RAM/cache that the benchmark needs?! Or am I completely off there? Anybody with any knowledge on that?
It certainly heavily decreases the amount of GPU power needed, so doing it with a lesser GPU is not really impressive.
If it also decreases the amount of cache needed, which would make sense given the 1080p results from AMD themselves, then this makes it less impressive, not more.
 
  • Like
Reactions: KyaraM
"720p (1280 x 720) resolution with low custom settings to discard the graphics card as the bottleneck and remove it from the equation."
Doesn't this decrease the amount of RAM/cache that the benchmark needs?! Or am I completely off there? Anybody with any knowledge on that?
It certainly heavily decreases the amount of GPU power needed, so doing it with a lesser GPU is not really impressive.
If it also decreases the amount of cache needed, which would make sense given the 1080p results from AMD themselves, then this makes it less impressive, not more.
It's quite funny, just like the slow DDR5 used in the Intel system. Since this "benchmark" is not GPU-limited, you can actually gain up to 30-50% more FPS depending on the DDR5 overclock.
 
DDR5-4800 CL40
... Is the DDR5 math the same for latency? So that would be about 16.67 ns?
Are DDR5 timings usually that awful?

Compared with an okay DDR4-3200 CL14 latency of 8.75 ns.

That seems like it could cause a noticeable difference in a benchmark that is latency-sensitive.
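For reference, the arithmetic is the same for DDR4 and DDR5: first-word CAS latency in nanoseconds is the CL cycle count divided by the memory clock, which is half the transfer rate. A quick sketch reproducing the numbers above:

```python
def cas_latency_ns(transfer_rate_mts, cl):
    """First-word latency: CL cycles at a clock of half the transfer rate."""
    memory_clock_mhz = transfer_rate_mts / 2
    return cl / memory_clock_mhz * 1000

print(f"DDR5-4800 CL40: {cas_latency_ns(4800, 40):.2f} ns")  # ~16.67 ns
print(f"DDR4-3200 CL14: {cas_latency_ns(3200, 14):.2f} ns")  # 8.75 ns
```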
 
Cautiously optimistic, for sure. I don't care so much about the comparison against Intel here as the one against the 5800X. More than against the 12900K/S/F, this CPU will do or die against AMD's own 5800X and 5900X. Remember, AMD touted the 5900X as its best gaming chip at launch.

Regards.