KirbysHammer
juanrga :
goldstone77 :
juanrga :
goldstone77 :
Now let's look at another review for the same game with DirectX 11 and 12 up to 2160p. I arranged the benchmarks by resolution and DirectX 11 vs. 12 support for easy comparison. Click here for a link to the full review I posted in the ThreadRipper Mega Thread.
I enjoy this continual switching of the review being cited whenever flaws or issues in the previous one are pointed out.
At least I make an attempt to quantify issues over a given range of reviews. You are mostly just argumentative, and if you find one flaw you throw out the entire review as invalid. You pick and choose values to suit your own agenda. You know the 1700 is a sub-$300 CPU and is the same CPU as the 1700X and 1800X, but you always try to use the latter when it suits your agenda, in an attempt to skew the view instead of looking at it holistically.
goldstone77 :
juanrga :
Moving up in resolution drastically reduces the gap even in Hitman, which shows one of the largest FPS differences between Intel and Ryzen. As I've said many times, Skylake-X is overall not worth the CPU and platform cost compared to the Ryzen 1700.
Increasing the resolution creates a GPU bottleneck, and the faster CPUs sit idle, waiting for the GPU to do its work.
We all know this. I use it to emphasize the point that unless you are gaming at low resolution with a 144 Hz monitor, the differences in FPS are mostly academic. For newer or well-optimized games the average FPS differences almost disappear at 1080p. What are you getting for the extra $300 in CPU cost? Don't forget the extra $150-400 in motherboard cost and the ~$50-100 cooling solution. A 23% faster Blender render? (Rough math after the quote below.)
Ryzen was 8% slower than the 7820X in this test when compared clock-for-clock and 23% slower once the 7820X was overclocked to 4.5GHz
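To put the value question in rough numbers, here is a quick back-of-the-envelope sketch in Python. It uses only the figures quoted in this exchange (the $300 CPU delta, the $150-400 motherboard range, the ~$50-100 cooler, and the 8%/23% Blender deltas); the range midpoints are my assumption, not official pricing.

```python
# Back-of-the-envelope value math using only the deltas quoted in the posts above
# (thread estimates, not official pricing); midpoints of the ranges are assumed.

extra_cpu   = 300                 # "extra $300 in CPU cost"
extra_board = (150 + 400) / 2     # midpoint of the $150-400 motherboard delta
extra_cool  = (50 + 100) / 2      # midpoint of the ~$50-100 cooling delta
extra_total = extra_cpu + extra_board + extra_cool

# Performance deltas quoted in the review excerpt above (Ryzen ~8% slower
# clock-for-clock, ~23% slower vs. the 7820X overclocked to 4.5 GHz).
delta_stock, delta_oc = 0.08, 0.23

print(f"Extra platform cost: ${extra_total:.0f}")
print(f"Cost per 1% of render advantage (clock-for-clock): ${extra_total / (delta_stock * 100):.0f}")
print(f"Cost per 1% of render advantage (7820X @ 4.5 GHz): ${extra_total / (delta_oc * 100):.0f}")
```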
juanrga :
This GPU bottleneck makes slower chips appear faster than they really are. It is the reason AMD wanted reviewers to test Ryzen at higher resolutions, and it is also the reason AMD's public demos before launch used higher resolutions.
They can't appear faster if they aren't doing any work. The massive number of broken, unoptimized games is why they wanted reviewers to use higher resolutions, and preferably games they already knew would show good performance vs. Intel. I'm done talking about the subject; those with eyes can obviously see the advantages and disadvantages.
(i)
That Hardware Unboxed review doesn't have "1 flaw". It has a half-dozen serious flaws, which completely invalidate the review and make it useless.
For the other reviews I have demonstrated the dirty tricks and flaws in the part being discussed. For instance, you brought us IPC measurements from ArsTechnica, and I demonstrated how they inflate the IPC for AMD chips. I didn't comment on the rest of the ArsTechnica review; maybe the rest is good, maybe it is not. I didn't check.
The 1700 is not the same chip as the 1700X or the 1800X. That is why AMD prices them differently. I have compared the three chips when discussing performance/price. However, you only want to compare the 1700, because it maximizes performance/price and fits your narrative.
The same happens with mobos. As I have noted, and as one moderator has noted, you cherry-picked the cheapest possible AM4 board (B350-based) to compare with the more expensive X299 board, as if both boards were comparable in quality and/or performance.
But your narrative is invalidated when one considers a broad selection of hardware.
(ii)
Reviewers don't run 1080p and 720p game tests because some people play games at those resolutions. Those low-resolution tests (so-called CPU tests) give an idea of the true gaming potential of the CPUs at higher resolutions.
That true gaming potential at higher resolutions reveals itself in the future, when one upgrades to a much more powerful GPU and reduces or eliminates the GPU bottleneck.
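As a minimal illustration of the GPU-bottleneck argument in (ii), here is a toy Python model where the frame rate is capped by whichever of the CPU or GPU takes longer per frame. Every per-frame time in it is an invented, illustrative number, not benchmark data.

```python
# Toy model of a GPU bottleneck (all per-frame times below are invented for
# illustration, not measured data): the displayed frame rate is capped by
# whichever of the CPU or GPU takes longer to finish a frame.

def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

cpus = {"faster CPU": 6.0, "slower CPU": 8.0}   # hypothetical ms of CPU work per frame
gpus = {"1080p, current GPU": 7.0,              # hypothetical ms of GPU work per frame
        "4K, current GPU": 16.0,
        "1080p, future GPU": 3.5}

for gpu_name, gpu_ms in gpus.items():
    results = ", ".join(f"{name}: {fps(cpu_ms, gpu_ms):5.1f} fps"
                        for name, cpu_ms in cpus.items())
    print(f"{gpu_name:18} -> {results}")
```

At 4K both CPUs land on the same GPU-limited number, while the low-resolution and future-GPU rows show the CPU gap reappearing, which is the rationale behind low-resolution CPU testing.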
(iii) There is no "massive amount of broken unoptimized games". This is the same weird excuse we read during the Bulldozer era. The reason Ryzen plays games worse is that its microarchitecture sucks on latency-sensitive workloads. The Zen microarchitecture has been designed for throughput workloads such as rendering and encoding.
The reason AMD wanted reviewers to test only at high resolutions is that it generates GPU bottlenecks and reduces or hides the deficit of the Zen microarchitecture. Reviewers have reported AMD's dirty tactics and attempts to mislead customers:
At this point, you might be left feeling disillusioned when considering AMD’s tech demos. Keep in mind that most of the charts leaked and created by AMD revolved around Cinebench, which is not a gaming workload. When there were gaming workloads, AMD inflated their numbers by doing a few things:
In the Sniper Elite demo, AMD frequently looked at the skybox when reloading, and often kept more of the skybox in the frustum than on the side-by-side Intel processor. A skybox has no geometry, which is what loads a CPU with draw calls, and so it’ll inflate the framerate by nature of testing with chaotically conducted methodology. As for the Battlefield 1 benchmarks, AMD also conducted using chaotic methods wherein the AMD CPU would zoom / look at different intervals than the Intel CPU, making it effectively impossible to compare the two head-to-head.
And, most importantly, all of these demos were run at 4K resolution. That creates a GPU bottleneck, meaning we are no longer observing true CPU performance. The analog would be to benchmark all GPUs at 720p, then declare they are equal (by way of tester-created CPU bottlenecks). There’s an argument to be made that low-end performance doesn’t matter if you’re stuck on the GPU, but that’s a bad argument: You don’t buy a worse-performing product for more money, especially when GPU upgrades will eventually out those limitations as bottlenecks external to the CPU vanish.
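As a rough illustration of the draw-call point in the quoted passage, here is a toy Python sketch where CPU frame time scales with the number of draw calls; both the fixed and per-call costs are invented for illustration only.

```python
# Toy sketch of the skybox point above: if CPU frame time grows with the number
# of draw calls, a geometry-free skybox view submits far fewer calls and the
# CPU-bound frame rate looks inflated. Costs below are invented for illustration.

BASE_MS = 2.0        # hypothetical fixed CPU cost per frame (ms)
PER_CALL_US = 2.5    # hypothetical CPU cost per draw call (microseconds)

def cpu_bound_fps(draw_calls):
    frame_ms = BASE_MS + draw_calls * PER_CALL_US / 1000
    return 1000 / frame_ms

print(f"Busy scene, 4000 draw calls: {cpu_bound_fps(4000):.0f} fps CPU-bound")
print(f"Skybox view, 500 draw calls: {cpu_bound_fps(500):.0f} fps CPU-bound")
```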
Agreed. I'm happy to see AMD competing with Intel (and they are, no doubt) but to see reviewers obviously biasing reviews makes me die a little inside.
Basically, what these reviewers have done is create an army of people who think AMD chips are the best at everything, Skylake-X is garbage, the 7700K is a stupid buy, etc. If you dare challenge their opinion you are a stupid Intel fanboy. This just frustrates me to no end.
All because of biased reviewers and a couple of benchmarks that were essentially rigged. I get it, you're sick of Intel. But your job is to review chips, and you shouldn't let your opinion of a company get in the way of an honest review.
Yes, AMD chips have their advantages, especially in tasks where throughput across lots of cores is what counts.
But reviewers have just glossed over Intel's higher clocks and better IPC, as well as better latency.