AgentLozen :
Hey everyone. I'm glad you are all interested in the Intel vs AMD debate!
I wanted to clarify why the author of this story used the game-testing methodology that he did, without getting into a long-winded internet war.
Question: Why do we test CPUs at 1080p when elite gamers are more interested in 1440p and 4K?
Answer: 1080p is preferred because it's a relatively low resolution where the strengths of the CPU shine the most. The limiting performance factor in higher-resolution gaming is the graphics card, so using a higher resolution takes the focus off the CPU. We DON'T want to do this while testing CPUs. Therefore, it makes the most sense to test at 1080p (or below).
I also want to address redgarl's statement about using mainstream GPUs for testing to simulate real-world conditions.
Question: Why do we test CPUs with expensive video cards that most people don't own?
Answer: Using a less powerful video card shifts the bottleneck back to the GPU. We want to highlight CPU performance in these tests, so shifting the bottleneck defeats the purpose of the test. It's true that most users don't own $800 video cards, but for the sake of testing, it makes the most sense to use an expensive one.
Here's a follow-up question to think about: do you think testing a CPU with a Radeon HD 7770 would be a good idea? I'm sure that video card is still floating around out there in a lot of machines.
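To make the bottleneck logic concrete, here's a rough back-of-the-napkin sketch in Python. Every frame rate in it is invented purely for illustration (nothing here is measured data); the only point is that the slower component sets the frame rate you actually see:

    # Toy model: each frame needs CPU work (game logic, draw calls) and GPU work
    # (rendering), so the slower of the two caps the frame rate you observe.
    # All numbers below are made up for illustration only.
    def effective_fps(cpu_fps, gpu_fps):
        """The frame rate you actually see is limited by the slower component."""
        return min(cpu_fps, gpu_fps)

    # Hypothetical CPU-side limits (frames per second each CPU could prepare).
    cpus = {"budget CPU": 90, "midrange CPU": 130, "high-end CPU": 160}

    # Hypothetical GPU-side limits for two cards at different resolutions.
    gpus = {
        "old HD 7770-class card @ 1080p": 45,
        "flagship card @ 1080p": 200,
        "flagship card @ 4K": 70,
    }

    for gpu_name, gpu_fps in gpus.items():
        print(gpu_name)
        for cpu_name, cpu_fps in cpus.items():
            print(f"  {cpu_name}: {effective_fps(cpu_fps, gpu_fps)} fps")

Behind the weak card (or at 4K on the flagship) every CPU lands on the same GPU-limited number, which is why a review that wants to rank CPUs pairs them with the fastest GPU it has at 1080p or below.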
Anyway, thanks for participating on the Tomshardware message boards.
spdragoo :
Well said. That was the reason I'd pointed to the Techspot article here: they had a very definitive example (using only Intel CPUs, for those who thought there might be a bias) of how, when the CPUs were paired with a GTX 1050 Ti, a Pentium G4560 (a locked 2C/4T CPU) had "identical" performance to the Kaby Lake i5-7600K (4C/4T) & the Skylake i7-6700K (4C/8T) -- & the latter 2 CPUs at stock even had faster clock speeds (~8% faster for the i5, ~15% faster for the i7).
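For anyone who wants to sanity-check those percentages: stock base clocks were roughly 3.5 GHz for the G4560, 3.8 GHz for the i5-7600K & 4.0 GHz for the i7-6700K. A quick sketch (the 60 fps GPU cap below is a made-up illustrative number, not Techspot's data) of why that clock advantage vanishes behind a 1050 Ti:

    # Stock base clocks (GHz) for the three CPUs in the Techspot pairing.
    clocks = {"Pentium G4560": 3.5, "i5-7600K": 3.8, "i7-6700K": 4.0}

    base = clocks["Pentium G4560"]
    for name, ghz in clocks.items():
        print(f"{name}: {ghz} GHz ({(ghz / base - 1) * 100:+.1f}% vs G4560)")

    # If the GTX 1050 Ti tops out at, say, 60 fps in a given title (hypothetical cap),
    # CPUs able to prepare 90, 100 or 110 fps all report the same "identical" 60 fps.
    gpu_cap = 60
    cpu_capability = {"Pentium G4560": 90, "i5-7600K": 100, "i7-6700K": 110}
    for name, cpu_fps in cpu_capability.items():
        print(f"{name} behind the 1050 Ti: {min(cpu_fps, gpu_cap)} fps")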
joeblowsmynose :
But don't you think that owners of a 1050 Ti have a right to know that upgrading their CPU from a Pentium to a 6700K would not, in real life, give them any performance benefit? Don't you see that by not including "real world" testing, the results are misleading to the majority of end users?
I have nothing against artificially created scenarios to emphasize testing results. My issue is with these replacing real-world test results altogether, which we are seeing everywhere.
If the Pentium owner in my example above had only the artificial results, they might be entirely misled into buying a shiny new, very expensive 6700K believing that they will get 20% more FPS, when in reality they'd just be getting ripped off because of the misleading info contained within the benchmarks.
This clearly isn't an AMD vs Intel issue ... and thank you for the clear example of how not including real-world testing in CPU benchmark results can be very misleading.
spdragoo :
Actually, that was my point about the CPU testing: only doing "realistic" test pairings ends up being misleading -- because, as we both agree, the average user would believe that the cheap Pentium is just as powerful as the super-expensive Core i7, when in fact the "equal" performance was only due to the GPU limitation.
That's why CPU testing should be done at a "mainstream" resolution (currently 1080p, though it was 720p & even lower back in prehistoric times) but with the top-of-the-line (or possibly #2 on the list) GPU & as much RAM as possible, so that the differences in performance can be assumed to be due solely to the CPUs' capabilities.
joeblowsmynose :
My argument from the start has never been that we should only do real-world testing -- you can read back all my posts; I made that disclaimer in almost every one. If anyone were arguing that we should only do real-world testing, I would argue against that, with you.
Two points -- you say that a gaming benchmark that shows a Pentium and a 6700K on a 1050 Ti is misleading because it doesn't show the "true" power of the 6700K in non-gaming workloads. We are talking about a gaming benchmark, right? Gaming benchmarks, you know, the ones that show you gaming performance? The whole topic of this discussion ... If I am looking at gaming benchmarks, I am obviously interested in gaming performance, and the gaming performance between the Pentium and the 6700K is EQUAL on the 1050 Ti, as posted prior. So a proper gaming benchmark should show the real gaming result -- what's wrong with showing reality? Why hide it?
Now let's talk about productivity. Let's say I want to edit videos and I am deciding between keeping my Pentium and going with the 6700K; would I look at gaming benchmarks to determine if my Pentium is better? No, I wouldn't -- I would look at productivity benchmarks, specifically video-editing benchmarks, and as long as we keep the results "real-world," I would see the appropriate results I was looking for there.
So please keep your productivity argument out of a gaming-benchmark performance debate ... I am, and have only been, referring to the expectations of CPU gaming performance that are created by how CPU gaming benchmarking is done, and how most often those numbers can never represent real-world conditions unless one is stupid enough to actually build a gaming rig where the CPU is the bottleneck.
I am not arguing that we should not do any 1080p gaming benchmarks -- that was never my argument, sheesh. My argument is, as I asked before: is it appropriate to show 1080p or 720p benchmarks with a bottlenecked CPU without also showing how the CPU performs as it would in someone's rig at home?
If we show only the bottlenecked scores, we shouldn't keep saying "X processor is faster at gaming"; we should say "here's what the benchmark shows if we artificially induce a non-realistic situation, and over here is what your real-world results look like." That would make me happy.
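Something like the mock-up below is all I mean -- every number in it is invented just to show the side-by-side presentation I'm asking for, not a real benchmark result:

    # Mock-up of dual reporting: CPU-isolated numbers next to a realistic pairing.
    # All figures are invented purely to illustrate the layout.
    results = [
        # (CPU, fps with a flagship GPU at 1080p, fps with a mainstream GPU like a 1050 Ti)
        ("Pentium G4560", 90, 60),
        ("Core i7-6700K", 140, 60),
    ]

    print(f"{'CPU':<16}{'CPU-bound (flagship GPU)':>26}{'Real-world (1050 Ti-class)':>28}")
    for cpu, isolated_fps, paired_fps in results:
        print(f"{cpu:<16}{isolated_fps:>26}{paired_fps:>28}")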
(for people who also love cars ...)
Just like you can put a modified Corvette on a dyno and say "here we have 750hp, and that 2017 Ford GT over there has only 650hp" -- if that was the only thing that was said, or all you had was the dyno graph to look at, you would believe that the 'vette was the faster car. Put them both on a drag strip or a track and you realize that your assumption based on the dyno graph couldn't be more wrong, and the real-world testing will show what your result will "actually" be. I'm not saying the dyno graph isn't fun or "good to know", but if the verdict on which car is faster were based on that, it would be wrong.
Just as I would argue in the opposite direction ... sure, the 2700X has more multithreaded chops than the 8700K, so one would assume that it can encode video faster in Adobe Premiere ... the reality is that Adobe has optimized their code to utilize Intel iGPUs, so the 8700K actually does a better job.
If we disabled the iGPU, sure, that would show the true performance of the actual CPUs and the 2700X would be the winner by a wide margin, but that wouldn't represent a real-world scenario, now would it? It would be misleading, wouldn't it?
So it should be tested as it would be used in the real world, which means that the 8700K is a better Adobe Premiere CPU than the 2700X ... period. I don't care if you could artificially induce a "proper" comparison by disabling the iGPU in the Intel chip; inducing that would be "wrong", wouldn't it? Or do "real world" results not matter enough to you that you think we should be disabling the iGPU on the 8700K for Adobe testing? I would argue that they definitely do matter, and see, I can argue for the blue team as well ... it's not about red vs blue.
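If anyone wants to see that iGPU effect for themselves outside of Premiere (which I can't script here), a rough stand-in is to time a pure-CPU encode against an Intel Quick Sync encode of the same clip. The sketch below assumes an ffmpeg build with QSV support and a local file called sample.mp4 -- both of those are my assumptions for illustration, not something from the Premiere tests:

    # Rough sketch: compare a pure-CPU encode (libx264) with an iGPU-assisted
    # Quick Sync encode (h264_qsv) of the same clip. Assumes ffmpeg is installed
    # with QSV support and that "sample.mp4" exists -- both are assumptions made
    # for illustration only.
    import subprocess
    import time

    def time_encode(codec: str) -> float:
        start = time.time()
        subprocess.run(
            ["ffmpeg", "-y", "-i", "sample.mp4", "-c:v", codec, "-f", "null", "-"],
            check=True,
            capture_output=True,
        )
        return time.time() - start

    for codec in ("libx264", "h264_qsv"):
        print(f"{codec}: {time_encode(codec):.1f} s")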