News Intel Arrow Lake CPUs benchmarked on Z890 motherboards — Core Ultra 7 265KF up to 4% faster than Core i9-14900K in Geekbench 6

>So you're defining "real world" by your own arbitrary view here and for some reason you don't get that.

I was afraid this is what it would come down to. Ignoring the issue at hand and delving into specious attacks like "arbitrary view." There's nothing arbitrary about saying "GPUs are bottlenecks in real-world gaming use," but whatever works for you.

>If you're in the market for a $300 video card why would you be even considering CPUs that cost more than it if you're into gaming?

Sure, attacking people's motivation is another way to avoid the issue.

>It's fine that you're incapable of seeing the value in something that isn't tailor made to what you want, but for anyone who's willing to see how it may apply to their situation it's valuable.

I do get it. The status quo is fine for you, and I suspect, for many. As said earlier, I have no interest in convincing you, or others, otherwise. Feel free to get in the last word.

But for @JarredWaltonGPU, if you're reading, it'd be great to see a CPU/GPU test where current CPUs are tested with a "regular" GPU like a 4060 or 7600XT. I'd truly like to see how much CPU differences matter in gaming for the 99% of us.
That already exists. Check the review of the 4060, look at how many fps it provides at your preferred resolution, and then go check a CPU review and see which CPUs max it out. It's not that hard, man.
 
>That already exists. Check the review of the 4060, look at how many fps it provides at your preferred resolution, and then go check a CPU review and see which CPUs max it out. It's not that hard, man.

Great! Walk me through it.

The 4060 admittedly isn't a great choice, as it would be the bottleneck for most current CPUs, so gaming perf would end up about the same for all of them.

Let's pick a 4070 Super at $600, which is already a pretty hefty outlay for the majority. Game res would be 1440p. For CPU, let's say I'm deciding between the 12400, 13600K, 7600X, 7800X3D, and 5700X3D. For simplicity, we'll ignore the buck portion and just look at the bang for this comparison.

All these CPUs have gaming benchmark numbers paired with a 4090 (or maybe a 3090). How do I, Joe Blow average buyer, convert those to 4070S numbers, and figure out whether there's any GPU bottlenecking, without any "extrapolating" guesswork? Please, do tell.


To be completely transparent, the point I'm raising is mainly rhetorical, as I can find "bottleneck thresholds" for GPUs elsewhere (i.e., not on THW). I'm an enthusiast, and I know how to search. But the point is still valid for the average reader, who tends to take these gaming benchmarks at face value and ends up overbuying on a "gaming CPU" because the benchmarks said one is "better."
Again, not that hard. If the CPU can get 200 fps with a 4090 at 720p, then it can push up to 200 fps at any resolution with any GPU.

So you just check the 4070 Super review at 1440p. According to Techspot, it gets 105 fps at 1440p across their gaming test suite. Now you check Techspot's CPU reviews, find which CPUs can hit 105 fps, and you're good to go.
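To make that concrete, here's a minimal sketch of that cross-referencing logic in Python. The 105 fps cap is the Techspot figure quoted above; the per-CPU fps ceilings are invented placeholders (not real review numbers), just to show how the lower of the two limits decides what you actually get.

```python
# Rough model: a game runs at roughly min(CPU-limited fps, GPU-limited fps).
# GPU cap: ~105 fps at 1440p for a 4070 Super (Techspot figure quoted above).
# CPU caps: placeholder values standing in for CPU-review results taken with
# a 4090 at low resolution; NOT real benchmark data.
GPU_CAP_1440P = 105

cpu_caps = {
    "i5-12400":   120,
    "i5-13600K":  165,
    "R5 7600X":   160,
    "R7 5700X3D": 150,
    "R7 7800X3D": 210,
}

for cpu, cpu_cap in cpu_caps.items():
    effective = min(cpu_cap, GPU_CAP_1440P)
    limiter = "CPU" if cpu_cap < GPU_CAP_1440P else "GPU"
    print(f"{cpu:>11}: ~{effective} fps with a 4070 Super at 1440p ({limiter}-limited)")
```

With placeholder numbers like these, every CPU whose low-res ceiling clears 105 fps lands on the same GPU-limited result, which is the whole point of the cross-reference.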
 
>Again, not that hard. If the CPU can get 200 fps with a 4090 at 720p, then it can push up to 200 fps at any resolution with any GPU.

>So you just check the 4070 Super review at 1440p. According to Techspot, it gets 105 fps at 1440p across their gaming test suite. Now you check Techspot's CPU reviews, find which CPUs can hit 105 fps, and you're good to go.
If it's not that hard, but most people don't know they should cross-reference multiple reviews to figure out what CPU they need, then some information on what CPU you need should just be included in the review.

How hard is it to add a column saying:
Recommended CPU for max FPS e-sports (5800X/12600K)
Recommended CPU for 1440p/High gaming (3600X/10600K)
Recommended CPU for 4K/High gaming (2600X/8600K)

The unfortunate truth for Intel, AMD, and tech sites is that getting the latest CPU just doesn't matter much for gaming. The people sticking with their 9th-gen Intel chips for 1440p gaming with a 3080 are making a perfectly logical decision.

There are currently YouTube videos about how disappointing the 9000 series is, when the truth is that CPU upgrades just don't matter much at the moment.

TechPowerUp shows the 7800X3D as 10 percent faster than a 9700X, 15 percent faster than a 7700X, and 18 percent faster than a 5800X3D at 720p, but who cares? Does it really matter whether you get 600 or 700 FPS in Counter-Strike?

All four CPUs are within 5 percent of each other at 1440p and 2 percent of each other at 4K; they are pretty much equivalent for gaming. Any recent Intel K-series chip (12700K, 13600K, 14600K, etc.) is also roughly equivalent to a 7800X3D for gaming.

For anyone who doesn't have a 4090, the CPU matters even less.
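As a back-of-the-envelope illustration of why those 720p leads evaporate: the percentages are the TechPowerUp gaps quoted above, while the 700 fps base value and the 105 fps GPU cap are assumed, illustrative numbers.

```python
# 720p numbers approximate the CPU ceiling; at 1440p/4K a mid-range GPU caps
# everything lower. The 700 fps base and 105 fps cap are assumptions for
# illustration, not measured data.
base = 700  # assumed 7800X3D ceiling at 720p with a 4090
ceilings = {
    "7800X3D": base,
    "9700X":   base / 1.10,  # 7800X3D is ~10% faster
    "7700X":   base / 1.15,  # ~15% faster
    "5800X3D": base / 1.18,  # ~18% faster
}
gpu_cap = 105  # what a mid-range card might manage at 1440p

for cpu, ceiling in ceilings.items():
    print(f"{cpu:>8}: ~{ceiling:.0f} fps CPU ceiling -> ~{min(ceiling, gpu_cap):.0f} fps GPU-limited")
# All four collapse to the same ~105 fps once the GPU is the limit.
```

The 10 to 18 percent gaps at 720p turn into a 0 percent gap once everything is pinned at the GPU's limit.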
 
All well and good, and we appreciate the reports, Aaron. But at what cost do we get those increases? We see this was done on a Z890 board; now to see it done on a B860 board.
And what about board pricing? Will those gains be undercut by board prices that drive us nuts, like the crapshoot we saw with AMD's AM5 boards?
 
>But for @JarredWaltonGPU, if you're reading, it'd be great to see a CPU/GPU test where current CPUs are tested with a "regular" GPU like a 4060 or 7600XT. I'd truly like to see how much CPU differences matter in gaming for the 99% of us.
I've often wanted to add a section doing something like "performance with a system you might actually use," particularly with budget and mainstream GPUs. It's unfortunately a ton of work, and even in the best case it would require triple the testing to do something like Core i5, i7, i9 or Ryzen 5, 7, 9 with such graphics cards.

But when I update my testbed this fall/winter (that's the plan, probably to Ryzen 7 9800X3D or whatever the single CCD X3D chip gets called), I may see about using a Core i5-13600K as a more mainstream option to add some extra tests on lower-end GPUs. I'm reasonably confident that stuff like RTX 4070 Super and below, and RX 7900 GRE and below, won't see more than a 5% delta in most cases when dropping down to a Core i5-13600K, but there will be outlier games where it's a much more noticeable deficit.
 
Including a mainstream CPU in reviews of new mainstream GPUs that people might upgrade to would be a nice addition.

A lot of people will probably be able to upgrade to a 5070 or an 8800XT without also upgrading their CPU.
It would be really nice to see that there is only a 5 percent difference between the 13600K you already have and a 9800X3D.
 
Is Geekbench relevant?


Also, as far as gaming performance is concerned, the average is not always a good indicator of performance. Sometimes one game will run poorly on an AMD chip, or won't play nicely with Nvidia, AMD, or Intel graphics.

So if it's a very important game, don't look at numbers only.
 
>Is Geekbench relevant?

>Also, as far as gaming performance is concerned, the average is not always a good indicator of performance. Sometimes one game will run poorly on an AMD chip, or won't play nicely with Nvidia, AMD, or Intel graphics.

>So if it's a very important game, don't look at numbers only.
It hasn't really been relevant for years, IMO. Those benches usually favour one side of the duopoly for optimized tasks. More often than not you have to look at some general benchmarks like Cinebench or even the CPU-Z bench for single- and multi-thread relative performance, then look at your game of choice or work suite (like Adobe) for another cross-reference, and then just go with your preferred brand; usually the percentage difference within the same generation doesn't have that much of an impact.

Like back when I upgraded from a Sandy Bridge 2600K to an Alder Lake 12700KF: batch Lightroom processing of 1000 24MP raw photos went from around 25 minutes to 10 minutes. Percentage-wise that's a great increase, but even for a day event you rarely exceed 1k photos needing processing, and compared to the 3-6 hours of overall processing time, that 15-minute gain is practically unimportant.

And for games, it's more often GPU-bound nowadays. Rarely will people pair something like an i5/Ryzen 5 with a top-of-the-line 4090, and with the top-of-the-line CPU options, a 10% difference in framerate likely works out to around 5 fps, which more often than not doesn't affect smoothness: either both are laggy or both run fine.
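Putting rough numbers on that Lightroom example: the 25 and 10 minute figures are from the post above, while the 4.5-hour day is just an assumed midpoint of the quoted 3-6 hour range.

```python
# Batch export of 1000 raw photos: ~25 min on the 2600K vs ~10 min on the
# 12700KF (figures from the post). The 4.5-hour day is an assumed midpoint
# of the quoted 3-6 hour range.
old_min, new_min = 25, 10
day_min = 4.5 * 60

speedup = old_min / new_min                   # 2.5x faster in isolation
saved_share = (old_min - new_min) / day_min   # share of the whole job saved

print(f"Batch speedup: {speedup:.1f}x ({old_min - new_min} minutes saved)")
print(f"Share of the whole 3-6 hour job: ~{saved_share:.0%}")  # roughly 6%
```

A 2.5x speedup on the export itself shaves only a few percent off the whole day's work, which is the point being made.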
 