AMD Ryzen 7 2700X Review: Redefining Ryzen


AgentLozen

Distinguished


Hey! Thanks for the post.

The reason resolutions beyond 1080p aren't tested is that that's where the graphics card becomes the bottleneck. Since the 2700X is the focus of this article, testing at higher resolutions falls outside its scope.
 

Rogue Leader

Titan
Moderator


In the car world you sound like the Honda/VW guys yelling about how their 4-cylinder makes more HP/L than the V8 in a Corvette. Well, that's great, except the Corvette still beat you. Fact is, other than being a general statistic, what does it matter?

Now what does matter is that at the flagship desktop level the 2700X is cheaper, has more cores/threads, and is extremely close in performance to, sometimes beating, its flagship counterpart, the 8700K. You can argue semantics (or "cpu architecture" as you said... do you even know what that means?) all you want, matching core for core and thread for thread, but price and tier are where people shop. Nobody is cross-shopping the Ryzen 5 with an i7.

PS: I race Volkswagens, and my gaming system is an 1800X with a Vega 64 I built on launch day, in case anyone wants to claim I'm an Intel shill.
 

redgarl

Distinguished


It is also, at the same time, an unrealistic benchmark: forcing a 1080 Ti at 1080p with a top-of-the-line CPU is totally unrealistic. These results are not at all on par with a realistic build of a 1600X with an RX 580 or a GTX 1060. At 1080p, with anything below a GTX 1070, you see no real differences.
 
Now, do the tests again with the Meltdown/Spectre patches applied on the Intel CPUs, as you should.
And you will see a VERY different story, with the 2700X destroying the 8700K in almost every measure.

(check out anandtech's review to get an idea)
 

jpe1701

Admirable


OK, thank you. I have the X370 Taichi, but my 1700X is a very poor overclocker. It takes 1.3875 Vcore for 3.8GHz, so I leave it at stock, but with the 2700X it looks like I could potentially get 500MHz more right out of the box.
 

Rogue Leader

Titan
Moderator


No, it's not. When you want to purely benchmark CPU performance in a game, you run the fastest GPU possible to eliminate its performance from the equation completely. There are legit performance differences between an RX 580 and a GTX 1060 in certain games at 1080p. The aim of a CPU benchmark is to bench the CPU alone, hence the use of the same RAM, GPU, and testing conditions across the board.
 

AgentLozen

Distinguished


I'm afraid I don't completely understand what you're getting at. I'm interpreting your post as "Why did this article feature a GeForce GTX 1080 FE in the test setup when no one would use that card for 1080p gaming? It's mismatched." I hope I got that right.

I think my last post summarizes the reason well enough, but I'll try to clarify further. The purpose of the benchmarks in this article is to test only the performance of the Ryzen 2700X. The author (tester?) has to make sure that other variables don't taint the results. For example, you wouldn't test software using only 512MB of RAM, because the RAM would bottleneck the system and produce inaccurate results. However, if you used 128GB of RAM, it wouldn't make a difference, because the Ryzen 2700X is the bottleneck at that point.

The same logic applies to graphics cards. You shouldn't use a GeForce GTX 650 Ti because the results would be bottlenecked by the graphics card. But if you used an overclocked GeForce GTX 1080 Ti, the Ryzen 2700X would be the bottleneck, and the graphics card wouldn't matter.

I hope I understood your comment and that this example explains why the GeForce GTX 1080 FE used in this test is appropriate.
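The bottleneck logic here can be captured in a toy model: the delivered frame rate is whatever the slower component allows. All numbers below are made up for illustration, not taken from the review:

```python
# Toy bottleneck model: the delivered fps is capped by the slower
# of the two components. All numbers are illustrative only.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Frame rate the system can actually deliver."""
    return min(cpu_fps, gpu_fps)

# Overpowered GPU: the CPU is the limit, so CPU differences
# show up directly in the benchmark.
print(delivered_fps(cpu_fps=120, gpu_fps=300))  # -> 120

# Weak GPU: two different CPUs hide behind the same GPU ceiling,
# so the benchmark tells you nothing about the CPUs.
print(delivered_fps(cpu_fps=120, gpu_fps=45))   # -> 45
print(delivered_fps(cpu_fps=140, gpu_fps=45))   # -> 45 (CPU gain invisible)
```

With a fast enough GPU, any change in the delivered number must come from the CPU, which is exactly what a CPU benchmark wants to isolate.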
 

logainofhades

Titan
Moderator
Exactly. When doing a CPU-centric review, you want to eliminate as many bottlenecks/variables as you possibly can. 1080p with an overpowered GPU achieves this. At one time, 720p was used for such reviews; graphics cards have since advanced to the point where 1080p can be used.

GPU reviews use the fastest CPU possible, to eliminate the CPU as the bottleneck.
 

papality

Honorable


Yeah I get all that. The problem is it's not a realistic use case any more. The people who are buying flagship $300+ processors are unlikely to be dropping cash like that in one area then hooking up a 1060 and just chugging happily along. If you are doing that and you're happy, hey, rock on and enjoy it. But the use case of high end CPU + high end GPU @ high resolution and frame rates is far, far more likely, especially since GPU manufacturers don't really advertise 1080p performance any more.
 

papality

Honorable


It's hardly an either-or situation. Who is it hurting to *also* have higher resolution benchmarks? Nobody's saying to ONLY use 1440/4K/UW/etc, but because I'm gaming at 1440p, that's the performance area that's relevant to me (and a lot of other people) and the potential upgrade I need to know about.
 

Rogue Leader

Titan
Moderator


It's not meant to be a realistic use case. The article is meant to compare pure processor performance, period. If you start introducing other bottlenecks to the system, then you are not getting a pure number and the results are useless.

What you want is a different article showing game performance of typical system builds. Tom's does do that, but at this point, with the processors just being released, those aren't available yet.
 

darth_adversor

Distinguished
On page 4, under test systems, for both Germany and the US, it states DDR4 2667 and DDR4 3466.

I don't know if there would be much difference between the two speeds, but I just wanted to clarify.

Also, I forgot to thank you for the review. I've been looking forward to this one; I'm excited to see AMD becoming increasingly competitive.

Last, I've never really understood the 99th percentile value analysis. Can someone explain that to me?
 

papality

Honorable


Yes, that is what I want. Now, I know Anandtech is currently revisiting their Zen+ review, but in their gaming benchmarks they're showing some pretty significant upgrades at 4K, which is the kind of info I'd expect Tom's (more or less the largest and most popular tech site in the world) to have in a CPU review.
 

FormatC

Distinguished


For all workstation tests I used the official clock rates, and for OC the maximum possible. That is the normal procedure to get correct results: nobody will run overclocked parts in production systems. I also explained this in my intro for workstation VGA and CPU, but that part is only available in German.

 

AgentLozen

Distinguished


It's used because the minimum frame rate can be misleading. If there is a single outlier, it can make it seem like you should expect that outlier frame rate frequently.

Look at this dataset over 10 seconds: 100fps, 100fps, 99fps, 100fps, 97fps, 101fps, 99fps, 100fps, 60fps, 98fps.

The minimum framerate is 60fps, yet 90% of the time the frame rate is dramatically higher than that. 60fps shouldn't represent the general performance of this set. Again, this is why Tom's uses the 99th percentile.
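A quick sketch of that comparison in plain Python. Note that reviewers usually compute the 99th percentile from frame times rather than fps readings, and with only 10 samples the percentile is approximated here by simply dropping the single worst sample:

```python
# Sample fps readings from the dataset above (one reading per second).
fps = [100, 100, 99, 100, 97, 101, 99, 100, 60, 98]

# The raw minimum is dominated by the single 60fps outlier.
minimum = min(fps)  # -> 60

# A percentile-style floor ignores rare outliers. With only 10
# samples, the second-worst reading is a rough stand-in: it's the
# level the system meets or exceeds ~90% of the time.
steady_floor = sorted(fps)[1]  # -> 97

print(minimum, steady_floor)
```

The minimum says "expect 60fps", while the outlier-resistant floor says "expect roughly 97fps or better almost all the time", which matches how the game actually feels.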
 


You complain about this every time there's a Ryzen review. And I will keep responding with the same words: go back through CPU reviews from around 2000 onward (anywhere, not just on TH) and see what resolutions they run. Clue: they NEVER run the maximum/high-end resolutions of the period; they run the mainstream monitor resolutions and/or those that show CPU power. And again I'm going to remind you that a LOT of people are running 1080p 144Hz/165Hz G-Sync or FreeSync monitors.

Anyway, back on topic: Ryzen 2 was as expected: similar to an Intel tick. A few tweaks to the architecture and a die shrink. Most impressive is Ryzen 2's increase in minimum FPS over Ryzen 1. Just look at GTA V's results, with the 1700X getting min/avg of 53/76 and the 2700X getting 67/91. Also, it's nice to see AMD processors being competitive in Project CARS 2. Slightly Mad Studios finally pacified all the AMD whiners about Project CARS 1 being Intel-biased (considering PCARS 2 is optimized for AMD-based consoles, I would certainly hope so).

In any event, Ryzen 2 continues the success of Ryzen 1 and, with the game performance boost, closes the gap with Intel's 6-core offerings (SMT or not), even at 1080p. But for a pure 1080p gaming rig with a FreeSync or G-Sync monitor, Intel is still the way to go. For a more balanced system between gaming and productivity, AMD is still the winner here, as it was with the first generation.

Paul Alcorn: is it possible to dump the 2-year-old Far Cry Primal and replace it with a Far Cry 5 bench in future reviews?
 

Rogue Leader

Titan
Moderator


Yes they are, but in most cases the differences are within the margin of error (2fps), and in other cases the results are basically the same as at 1080p, just more pronounced there. At 4K you are introducing GPU limitations again. Yes, it's the same GPU, but at 4K it's running at its limit, meaning its performance may not be completely consistent.

Again, a pure CPU benchmark is no place for outside forces influencing the performance.
 

papality

Honorable


Because it's interesting information relevant to consumers? Trust me, I understand the point of removing the bottlenecks; that doesn't need to be explained to me 10 times, because it's not a response to the question being asked.
 

AgentLozen

Distinguished


I became curious when you said that the 2700x was offering a big performance difference in 4K. I wanted to check it out for myself.

Anandtech Gaming Benchmarks
Civ 6 Avg framerate 1080P: 2700x = 88.33, 8700K = 77.31
Civ 6 Avg framerate 4K: 2700x = 86.76, 8700K = 76.49
Civ 6 Avg framerate 8K: 2700x = 56.94, 8700K = 57.15

GTA 5 Avg framerate 1080P: 2700x = 104.02, 8700K = 91.77
GTA 5 Avg framerate 4K: 2700X = 61.34, 8700K = 60.68

In both of these benchmarks, there is a pronounced difference at 1080P. We want to measure this difference. It's valuable to the testing.

At extremely high resolutions (8K for Civ 6, 4K for GTA 5), both processors perform almost EXACTLY the same. If you examine the frame rates at extreme resolutions further, the benchmarks show the Ryzen 5 2600 outperforming the 2700X by a fraction of a frame. Clearly, this information is useless except to show that the game doesn't crash on either the Intel or the AMD platform.

This is the reason Tom's uses low resolutions in their testing. If you're curious to see off-the-wall, lmfao-tastic hardware combinations, that's the topic of a different article.
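To quantify how the gap collapses, here's a quick sketch that computes the 2700X's percentage lead over the 8700K from the average-fps pairs quoted above:

```python
# Average fps pairs (2700X, 8700K) copied from the Anandtech numbers
# quoted above.
results = {
    "Civ 6 @ 1080p": (88.33, 77.31),
    "Civ 6 @ 4K":    (86.76, 76.49),
    "Civ 6 @ 8K":    (56.94, 57.15),
    "GTA 5 @ 1080p": (104.02, 91.77),
    "GTA 5 @ 4K":    (61.34, 60.68),
}

for name, (amd, intel) in results.items():
    # Percentage lead of the 2700X over the 8700K; negative means
    # the 8700K came out ahead.
    gap = (amd - intel) / intel * 100
    print(f"{name}: {gap:+.1f}%")
```

The double-digit leads at CPU-bound settings shrink to roughly a percent or less once the GPU becomes the limit (GTA 5 at 4K, Civ 6 at 8K), which is the convergence described above.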

In any case, thank you for your contribution to the topic. =)
 

Rogue Leader

Titan
Moderator


What consumers? A 1-2fps difference on average in a game at 4K means nothing to me as a consumer (or anyone, for that matter). 2fps is literally margin of error. Clearly you aren't understanding the point of removing the bottlenecks, because you are harping on a result that is being directly caused by a bottleneck.
 

redgarl

Distinguished


It is unrealistic because it doesn't affect the customer while creating a false sense of superiority.

Basically, you are telling Joe Blow that Intel is better at gaming, but Joe Blow is going to buy an i5-8400 and a GTX 1060. If you take a GTX 1060 at 1080p with either Ryzen (first gen) or Coffee Lake, it doesn't matter. They behave basically the same way because the card is the bottleneck.

What you introduce is scope creep and a behavior only present at 1080p with a GPU above $500. At 1440p or 2160p this is a non-issue; there, multi-threaded performance is what's going to matter.

So basically, this bench is true in what... 3% of situations, in the best-case scenario with new builds? While the contrary is not even mentioned.

So the comment about why you should STILL include 1440p and 2160p benches is valid... it is because you want to give people the whole picture.

What you should always mention is that this is only true when the card is not the bottleneck, which only happens with anything above a GTX 1070. And even then, I hope you mention that it only affects people at 1080p @ 144Hz and above, because at 60Hz, again, it is a non-issue.

So, do I have a problem with the way this bench is done... you bet I do. It is totally misleading information that doesn't show the real picture.

If you test a CPU at 1080p for gaming, then you should provide a budget, a high-end, and an enthusiast perspective. If not, you are just propagating FUD.
 