AMD Ryzen 7 1800X CPU Review



The 1800X was already running at 4.0 GHz for the entire Cinebench load test, as that is its boost clock speed. There is little point in manually overclocking the 1800X to sit at 4 GHz all the time. However, you do need to do that for the 1700 to reach 4 GHz, so it is a downside. A small one, I agree, but present nonetheless.
 
@quilciri I think the 1800X only boosts to 4.0 GHz on lightly threaded loads. Running something like Cinebench nT would see lower clocks, while the OCed 1700 would be running 4.0 GHz with all 8 cores under load, which would explain why the OCed 1700 beats the 1800X in heavily threaded benchmarks.
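If anyone wants to check this for themselves, logging per-core clocks while Cinebench 1T and then nT runs will show it. A rough Python sketch, assuming psutil is installed (per-core readings are most reliable on Linux; Windows may only report a single value):

```python
# Log each core's reported clock once per second while a load runs.
# Compare a lightly threaded run (1T) against an all-core run (nT).
import time
import psutil  # pip install psutil

def log_core_clocks(duration_s=30, interval_s=1.0):
    end = time.time() + duration_s
    while time.time() < end:
        freqs = psutil.cpu_freq(percpu=True)  # one entry per logical core, platform permitting
        print("  ".join(f"c{i}:{cf.current:5.0f}MHz" for i, cf in enumerate(freqs)))
        time.sleep(interval_s)

if __name__ == "__main__":
    log_core_clocks()  # start Cinebench in another window, then watch the clocks
```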
 


I guess Shadowplay shouldn't make much of a difference. As I understand it, most of the work is done by the GPU, which would actually make it more GPU bound (and the CPU less important). Feel free to correct me, I might be wrong.
 


No, you're right. Shadowplay uses less than 1% of my CPU when it isn't recording, and its CPU requirements while recording are minimal too. Streaming is a different story.
 

With the number of microcode, BIOS, driver, memory-compatibility and other bugs identified on many motherboards and in benchmarks at launch, I'm guessing most sites are refraining from posting motherboard reviews until most of those get sorted out. It would be pointless to publish reviews while there are still so many launch-day wrinkles to smooth out that will require a re-test which may completely upend the conclusions within weeks.
 
I absolutely never, EVER, buy a new family of computing products until a few months after launch. I like to wait for a few months for the various bugs and quirks to get ironed-out. Kind of the same reason why I never buy a new Windows OS until after its inevitable Service Pack 1 is released.

I'm not a hardcore gamer; I do some CAD and engineering software as well. So I am glad to see that the new processor seems to do very well in everything related to those. I just hope Autodesk embraces the reality of non-Intel CPUs potentially running their software. I would also like to see how a 1700 does with rendering in Revit 2017.

Again, I'll wait a few months. But I'd love to put together a workstation with the base 1700 and an Nvidia Quadro P6000. That would be an awesome workstation!
 
I mean downclocking the max turbo just to get it to boot (because Windows will use all 8 cores) and then doing dual-core and quad-core benchmarks, with the MAJORITY of your overclocked cores PARKED.

Even my 7700K doesn't use all 4 cores in most games. I don't even overclock my 7700K this way. I have 3 cores at 5.1 GHz and 1 core at 4.8 GHz.

STOCK VOLTAGE, easy to do on air.
 
To do this stably, you have to wait for better BIOSes 😉

And the second problem is the 4 GHz barrier, which can't be broken that easily, not even on a single core. You have to be lucky to go higher with your CPU.
 


No, the 1/2GB VRAM issue was nothing compared to this. This was rather monumental, uncovering practices so crooked that, in the tech industry, only Intel could be called worse. I'll tell you what I remember. It was a looong time ago, so some details might be off, but the gist of it will be correct:

Charlie Demerjian's nVidia exposé was actually twofold. First, he noticed that none of the review sites (including the site he worked for, theinquirer.net) were showing the GTS 250 and the 9800GTX+ in the same benchmark. He was dumbfounded by this, because the 9800GTX+ was a very recent card, and he couldn't make sense of the omission. He asked the person who did the review about it and was told that "nVidia didn't want it done that way." At that time, nVidia literally ruled the world of GPUs: ATi had nothing to counter the GeForce 8800 series and was forced into the same position that, until eleven days ago, AMD was in with CPUs. Charlie also discovered that this wasn't the first time nVidia had re-branded old hardware as new; the 9800 series itself was nothing more than the G80 GPU from the 8800 series with a die shrink and PCI-Express 2.0 support. He then surmised that the only reason nVidia wouldn't want the GTS 250 and 9800GTX+ benchmarked together would be if they were the same card, since PCI-Express 3.0 wasn't a thing yet and nVidia hadn't done a die shrink recently.

The second part of the exposé came about because he couldn't find solid information anywhere showing that the 9800GTX+ and GTS 250 were the same, so he benchmarked the cards himself across a whole ton of games. He discovered not only that they were, in fact, the same card, but also that the GeForce cards were only vastly superior to the Radeon cards in the games most of the sites were using. He went back to the reviewer and grilled him some more, and discovered that nVidia had been sending "guidelines" on the games it would allow the GeForce cards to be benchmarked on. These "guidelines" were thinly veiled threats that whoever didn't "play ball" wouldn't get any more nVidia sample cards. At the time, nVidia cards were HFSWTF expensive and most sites couldn't afford to buy them to test, so sites "played ball" for the most part.

Charlie was furious at this because it meant that everyone (including him) was getting screwed. When he saw that nVidia was not only doing this but also charging an extra $50 for cards marked "GTS 250" over the same cards previously marked "9800GTX+", he made the decision to publicly call them out on it, and he did. What happened next is perhaps the biggest attempted smear campaign against a tech reporter ever undertaken by a hardware manufacturer. Suddenly everything he said was being questioned by certain sites under direct pressure from nVidia. They were careful not to make any direct allegations of lying, but they made it obvious that anything, if said by Charlie Demerjian, was semi-accurate at best (I think this is where he got the name for semiaccurate.com).

In the end, the consciences of some site owners caused them to come forward and corroborate his story. Eventually, once the cat was out of the bag, all the affected sites stepped forward. The problem was that the nVidia shills were still hard at work trying to destroy his reputation in retaliation for exposing nVidia's practices. As a result, Charlie waged a war of his own against nVidia, and it was actually he who pointed out the wood screws in "Dear Leader's" Fermi sample when the latter lied and told investors that Fermi was in fact ready and finished.

You can still see a lot of what happened at theinquirer.net in the "Charlie vs. nVidia" section.

 


Well, I guess that this makes you, TMTOWTSAC, the King (or Queen) of missing the point. His point wasn't WHEN the CPU was made; the FX-8350 overtaking the i5-2500K was just a symptom of the root point he was trying to make. His point was that the low-resolution gaming test isn't the 100% accurate test that everyone seems to think it is. He showed that even at low resolutions, changing the GPU has an effect, regardless of how much you're trying to isolate the CPU.

His other point was that at the performance levels we're seeing from all CPUs, even the FX-8350 would be indistinguishable from an i7-7700K in modern games. This is because the most important thing about a CPU in gaming is not that it be wickedly fast, it's that it be "fast enough" to stay out of the GPU's way and deliver smooth frame rates. As long as your minimum fps is at least 30, you won't be able to tell one from the other. And he also showed that just because one CPU has lower frame rates than another today, it doesn't mean the other CPU will outlive it. The Core2Quad certainly didn't come close to outliving the Phenom II, even though they were neck-and-neck.
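The "fast enough" argument is easier to see as frame-time arithmetic: whichever of the CPU or GPU takes longer per frame sets the frame rate, so extra CPU speed buys nothing once the GPU is the slower of the two. A toy illustration (the millisecond figures are made up):

```python
# Toy frame-budget model: the slower component per frame dictates the fps.
def fps(cpu_ms, gpu_ms):
    return 1000 / max(cpu_ms, gpu_ms)

print(fps(cpu_ms=8.0, gpu_ms=14.0))   # ~71 fps: GPU-bound
print(fps(cpu_ms=4.0, gpu_ms=14.0))   # still ~71 fps: a faster CPU changed nothing
print(fps(cpu_ms=20.0, gpu_ms=14.0))  # 50 fps: now the CPU is the one in the way
```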

I hope you understand now what it was he was trying to say.
 
Back when the Core i7-7700K launched, I recommended it only to people who were already in the market for the Core i7-6700K. Think about it: "If you were planning to buy the old model, buy the new one instead; they're priced almost the same."

Now, you were probably one of the guys who said I was in the pocket of Intel for saying that. Since I never got a check from Intel, I'm relying on people like you to pay up. I still drive an 11-year-old Chevrolet given to me by my mom when she quit driving, and I have my eye on something a little more exclusive 😀

 
There's a simple reason reviewers should test in as many possible configurations and resolutions as possible all the time: data points.

Since the time they have is obviously never enough, they have to concentrate their testing on what they consider adds the most value to purchase decisions; or, in the case of some cheap reviewers, on whatever gets hits by displaying information in a biased way (ironically, on those sites you see each side claiming the allied camp is the fair one, lol).

Regarding gaming resolutions in particular, I still think 1920x1080 is relevant (especially at 120 Hz+). The Steam survey backs up every single reviewer out there in this regard; the higher resolutions are less than 10% combined (can't remember the exact percentages, but around there). Also, 1080p is still taxing enough to push CPUs and make them the bottleneck as long as the eye candy depends on the CPU. Ironically, on this point, Crysis would be a *very* good benchmark even to this day: its engine ran a lot of the eye candy on the CPU.

That being said, I do consider 4K another necessity going forward, and I'd love for reviewers to treat both as *mandatory* and divide their time accordingly. For the industry to move in that direction, prices must justify it first AND it has to be tempting. The reason is simple: depending on the engine, CPUs are still being taxed at higher resolutions *and* in certain configurations.

Also, please, MMORPGs. You guys used to test WoW; why not test GuildWars2 with the new maps? They are plenty taxing. I believe that, as benchmarks, MPs and MMORPGs are excellent data points for assessing CPUs. Streaming not so much, but if you ever find a way, by all means. More data never hurts.

And I do not think Tom's is biased; otherwise they would have only used 4K like AMD wanted, according to the rumor mill, right? ;D

Cheers!
 

Higher resolutions mainly push GPU fill rate and shaders. Until GPUs are powerful enough to push frame rates that people who want 4K Ultra can be happy with, it does not make much sense to bring clearly GPU-bound benchmark scenarios (the FX-8350 scoring almost the same as Ryzen, the i3-7350K and the i7-7700K) to a CPU fight.
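Just to put numbers on "mainly push GPU fill rate", the pixel counts alone make the point (trivial back-of-the-envelope arithmetic):

```python
# Pixels the GPU has to fill per frame at common resolutions, relative to 1080p.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels ({w * h / base:.1f}x 1080p)")
# 4K is ~4x the pixel work of 1080p, while the CPU's per-frame work barely grows,
# so high-resolution tests drift toward the GPU limit.
```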

Why not benchmark MMOs? One word: repeatability. It is usually next to impossible to produce repeatable scenarios in MMOs. Mobs spawn in different areas and quantities, you have no control over how many of which spells other players may use in your test zones or when, most MMOs don't let you script a path so it repeats exactly the same way between runs, etc. Far too many external variables that can invalidate results.
 


Thanks for ignoring this: "Crysis would be a *very* good benchmark even to this day. They had some of the game engine eye candy run a lot of stuff in the CPU".

Also, I don't care if it's hard to test MMORPGs. It is an important test to perform; there is no way in hell you can deny that. It is up to you reviewers to be smart about it and do it. Yes, it sounds like I'm being unfair, but at the end of the day, being smart about what you test and what brings more to the table is what will let Mr Crashman buy that fancy new car he has planned.

In science, you can always qualify data points with error margins and repetition, or just be explicit about all the variables. This is just data gathering; it's not rocket science. There are several methods and processes out there for handling the data so it is presented in the best possible way to describe what you want to measure. And please, don't take this as an "oh, so you want us to fake results?!" Please, just don't.
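To be concrete about "error margins and repetition", this is the kind of thing I mean; a minimal sketch with made-up run numbers:

```python
# Repeated runs of a noisy scenario -> mean fps with a rough 95% confidence interval.
import statistics

runs_fps = [71.2, 69.8, 73.5, 70.4, 72.1]  # invented numbers: 5 passes through the same zone

mean = statistics.mean(runs_fps)
stdev = statistics.stdev(runs_fps)
half_width = 2.78 * stdev / len(runs_fps) ** 0.5  # t-value for n=5 at ~95%

print(f"{mean:.1f} fps +/- {half_width:.1f} over {len(runs_fps)} runs")
# If two CPUs' intervals overlap heavily, the scenario is too noisy to separate them;
# if they don't overlap, the difference stands despite the imperfect repeatability.
```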

Cheers!
 


Actually, I also agree that it is not such a good idea to benchmark MMORPGs. My reason is that the time you are REALLY CPU bound is when there are lots of players fighting lots of things with effects flying, and that is the hardest thing to produce consistently. Producing it consistently would require huge amounts of time and effort that could be better spent benchmarking several extra games instead.

BUT there are some ways of doing it repeatably (or semi-repeatably) that I can think of:

1- Empty place, no people, some or many mobs, only one person fighting: very repeatable, but there is no taxing workload. Useless.

2- Main commercial city filled with people on a busy day at peak hours: repeatable enough, as long as it's done on the same day at approximately the same time. Somewhat taxing, but no effects and no fighting. The main disadvantage is that a result from a given date is useless later on, since the game's population varies too much over the medium term, so there is no comparability with future hardware; the results are only good for one review. Basically useless.

3- Set up your own private server and configure many bots to fight repeatedly against certain mobs: perfect repeatability and a realistic load, but it would cost too much (money and time) to benchmark only one game, and even more if you want several MMORPGs. One advantage, though: setting up just one server is good enough for several years, since MMORPGs don't change much over time and only a few are popular in any given 3-5 year period.


The last option is the only feasible one (although quite costly for just one game); a rough sketch of the idea is below.
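To illustrate what I mean by option 3: the controller only needs to replay the exact same fight every run. Everything below is hypothetical (the command format and names are invented; the real interface would depend entirely on whatever server emulator you set up):

```python
# Hypothetical sketch of option 3: pre-generate one fixed fight script so every
# benchmark run replays identical bot actions. Names here are invented.
import random

def build_fight_script(n_bots=40, n_mobs=60, duration_ticks=600, seed=1234):
    rng = random.Random(seed)  # fixed seed -> the same "random" fight every run
    spells = ["fireball", "frostbolt", "arcane_blast"]
    script = []
    for tick in range(duration_ticks):
        for bot in range(n_bots):
            script.append((tick, f"bot{bot}", rng.choice(spells), f"mob{rng.randrange(n_mobs)}"))
    return script

if __name__ == "__main__":
    script = build_fight_script()
    print(len(script), "commands; first:", script[0])
    # A real harness would feed these to the server's bot interface tick by tick
    # while the client under test records frame times in the middle of the brawl.
```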


I hardly know anything about Crysis, so I won't comment on that.
 
Since we are a community of tech-related people, maybe someone knows somebody who is willing to donate the programming effort or make it open source, so that any hardware review site can do MMORPG benchmarks from now on. I would, but I don't want to
(and don't know how, either).
 


He's making two arguments, one of which was that the performance of the 8350 in newly released games improved steadily against that of the 2500K over the course of four years. That was the main point I was addressing in my post; I'm sorry if that wasn't obvious. As for low-resolution benchmarking not being predictive of performance, that's a lot murkier.

The issue with the comparisons he makes is that he hasn't controlled all the variables. The benchmarks he's using come from www.computerbase.de, run through Google Translate. The first problem is that the 2012 benchmarks paired the 8350 and i5-2500K with a GTX 680 on Windows 7 64-bit Ultimate, Service Pack 1. The next set he compares them to paired the 8350 and i5-2500K with a GTX Titan on Windows 8 Enterprise updated through January 2013. His performance predictions were predicated solely on the change of GPU from the 680 to the Titan, but he fails to take into account the gaming-performance differences and threading optimizations between Win 7 and Win 8.

http://www.tomshardware.com/reviews/windows-8-gaming-performance,3331.html

The second problem is that he's using an aggregate benchmark comprised of a suite of games that changes between tests. New games are added, old games are removed, and some of the games have hard fps limits which flatten the results. So when he says that the lead went from 10.4% to 8.5% at 1080p, it's no longer the same set of tests. And by the time he gets to the Kaby Lake article for his comparisons, all systems are running Win 10 (or Linux for specific tests) and a completely different suite of games from the first bench.
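You can see how much the suite itself moves the headline number with a toy example (all of the fps figures below are invented, purely to show the mechanics):

```python
# Toy example: the "average lead" of CPU A over CPU B shifts when the game suite
# changes and when fps-capped titles flatten the spread. All numbers invented.
from statistics import geometric_mean

def lead_percent(suite):
    ratios = [a / b for a, b in suite.values()]
    return (geometric_mean(ratios) - 1) * 100

suite_2012 = {              # (cpu_a_fps, cpu_b_fps)
    "Game 1": (95, 84),
    "Game 2": (120, 104),
    "Game 3": (60, 60),     # engine-capped at 60 fps -> the difference is erased
}
suite_2013 = {
    "Game 2": (128, 116),   # retested on a faster GPU and a newer OS
    "Game 4": (88, 83),     # new title added
    "Game 5": (144, 139),   # old title dropped, another added
}

print(f"2012 suite lead: {lead_percent(suite_2012):.1f}%")
print(f"2013 suite lead: {lead_percent(suite_2013):.1f}%")
# The two percentages aren't comparable: games, caps, GPU and OS all changed at once.
```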

All that having been said, I'm not discounting the idea that low-res results aren't fully representative of overall performance. It's already well known that different types of games rely more heavily on different resources, be it CPU cores, memory bandwidth, latency, etc. Testing CS:GO at low res is essentially a clock-speed indicator, which makes sense when you're surpassing 500 fps. Most of the benches of the 1080 Ti show a much smaller divide between Ryzen and the 7700K than with the 1080. Unfortunately, as with ComputerBase, the testing methodologies are not uniform, so it's too early to draw conclusions. But as far as that video goes, my initial impression is that the results are simply tracking Windows multithreading improvements from 7 to 8 to 10.
 