AMD Ryzen Threadripper 1950X Review

Are you talking about games where the performance is GPU-dependent vs. CPU-dependent? Or are you actually saying that there are games out there that will struggle when using a GTX 1080, yet the limitation is not any CPU constraint (e.g. game code optimization, raw CPU performance, etc.), but rather that the GTX 1080 just can't handle it?
 


Games where the GPU's performance determines the frame rate. If a game gets fairly close frame rates across an i3/i5/i7/Ryzen/etc., then the graphics card is the limiting factor. The titles you listed are like that (BF1, Shadow of Mordor, Rise of the Tomb Raider & Witcher 3), with BF1 being a slight exception, since it's bound by both until you reach high-end CPUs. Other titles are the exact opposite and are CPU-bound, usually ones with a bunch of stuff going on in the background, like RTS games. Another example of CPU-bound would be emulators such as Dolphin and PCSX2.
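
To make the distinction concrete, here's a toy sketch of the reasoning (all FPS numbers below are made up for illustration, not from the review): with the same GPU, a small FPS spread across very different CPUs points at the GPU as the limiter, while a wide spread points at the CPU.

```python
# Toy classification of GPU-bound vs. CPU-bound titles.
# All FPS figures are hypothetical, for illustration only.
fps_with_same_gpu = {
    "Witcher 3 (GPU-bound example)": {"i3": 98, "i5": 101, "i7": 103, "Ryzen 7": 100},
    "RTS title (CPU-bound example)": {"i3": 41, "i5": 58, "i7": 74, "Ryzen 7": 66},
}

for game, results in fps_with_same_gpu.items():
    lo, hi = min(results.values()), max(results.values())
    spread = (hi - lo) / hi                  # relative FPS spread across CPUs
    verdict = "GPU-bound" if spread < 0.10 else "CPU-bound"
    print(f"{game}: {spread:.0%} spread -> {verdict}")
```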
 


It was very refreshing to read your take. Thanks for taking the time to write it.

Now, if I may: the only counter-argument I'd give you about Tom's take/numbers on TR is about testing methodology. As usual, the devil lurks in the details. Tom's methodology, although not perfect, covers a lot of important aspects of a CPU that are usually overlooked in games. Case in point: they tested as many configs as possible to see how TR handled itself in game benchmarks. With a wider, bigger picture, the conclusions are usually better than with less information, which tends to narrow them.

Cheers!
 


Then I suppose the question we have for the official tester is: when they were benchmarking these different systems, was the GTX 1080 pegged at near-100% utilization during the tests? If it was, then yeah, I could maybe see justification for discounting test results that were really close, "because the GPU was holding the CPUs back from their full potential"...

But...

If the GTX 1080 was not hitting near-100% utilization in these tests, and was in fact only chugging along at 40-50% utilization (which would be expected for a GPU designed to excel at 1440p and aimed toward 4K), then, as reviewers from multiple sites (Tom's, Techspot, etc.) have been saying for years, testing games at 1080p with high-end GPUs is the most effective way to expose CPU limitations.
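
For what it's worth, logging that during a run isn't hard. Here's a rough sketch; it assumes an NVIDIA card and the pynvml bindings (pip install nvidia-ml-py), so treat it as illustrative rather than the reviewers' actual tooling:

```python
# Sample GPU utilization once a second while a benchmark runs.
# Near-100% averages suggest a GPU-bound test; lots of headroom
# suggests the CPUs were the ones being stressed.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU in the system

samples = []
for _ in range(60):                             # one minute of samples
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    samples.append(util.gpu)                    # percent of time the GPU was busy
    time.sleep(1.0)

pynvml.nvmlShutdown()
avg = sum(samples) / len(samples)
print(f"average GPU utilization: {avg:.0f}%")
```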
 


I can't imagine they've ever stated it that way. Games generally aren't a good way to test CPUs unless all you care about is gaming. Games tend to be far more dependent on the GPU than the CPU.

If you explicitly limit yourself to testing with games, then that statement would then be closer to being correct. There'd still be quite a few caveats, conditions, and considerations involved, though.
 
The Cinebench R15 results, among others, are very low compared to other reviews; something is way off. Was only single-threaded performance tested? What would be the point of only providing single-threaded results on a 32-thread chip?
 


If you want to test how CPUs perform in games, they are.

The higher resolution you go, typically the less the CPU matters.

So actually you want a GPU that ISN'T the bottleneck. That's why you run the games at 1080p: so the games can run "at max" while the GPU isn't holding them back, and the CPUs can work as hard as possible. And games are all inconsistent with each other as well.
 
I included the exception for the special case of CPU performance in games in my initial comment on the matter. The issue I was attempting to point out is that the statement, by itself, isn't accurate.

There are qualifiers, conditions, and restrictions that accompany the statement. The individual in question is basing their argument on the idea that the statement is universally true. It isn't. The possibility that the GTX 1080 can indeed be the bottleneck follows from those same qualifiers, exceptions, and restrictions.

I'm not saying it was bottlenecking. I'm just saying that it's within the realm of possibility.
 
Where do you see Cinebench R15 single- or multi-threaded results in this review? I tested it only for OpenGL. For compute, I used mostly real-world applications. Please send me a link or quote so I can understand what you mean.

 

That seems like such a specialized market that I think you'd be better served by benchmarks conducted by a publication/site focused on electronic musicians. They should have a better idea of exactly what makes the most sense to test.

...not that I'd mind seeing such benchmarks included here, but I don't know how relevant they would be for your purchasing decisions (unless Paul happens to be a part-time orchestral composer and knows exactly what to test).

For one thing, I'd expect a DAW's cooling to be inaudible. If so, then TR is probably not the CPU for you (at least, not without some kind of huge, custom radiator).


Is it? GPUs would be far better suited to that, and I recall reading about them already being used to apply impulse responses captured from real-world environments nearly a decade ago.

Assuming most large, complex audio processing is now done on GPUs, the performance of such workstations comes down to loading patches and generally shuffling around large amounts of data. And perhaps the effects that are still CPU-based don't simply depend on raw convolution and FFT performance.
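
To illustrate what "raw convolution and FFT performance" means here, this is a minimal sketch of FFT-based convolution reverb, i.e. applying a captured impulse response to a dry signal (the signals below are synthetic placeholders, not real audio):

```python
# FFT-based linear convolution: O(n log n) instead of O(n*m) directly,
# which is why convolution reverb leans so hard on FFT throughput.
import numpy as np

def fft_convolve(dry: np.ndarray, ir: np.ndarray) -> np.ndarray:
    n = len(dry) + len(ir) - 1                 # full convolution length
    n_fft = 1 << (n - 1).bit_length()          # round up to a power of two
    spectrum = np.fft.rfft(dry, n_fft) * np.fft.rfft(ir, n_fft)
    return np.fft.irfft(spectrum, n_fft)[:n]

rate = 48_000
dry = np.random.randn(2 * rate)                                # 2 s of fake audio
ir = np.exp(-np.linspace(0, 8, rate)) * np.random.randn(rate)  # fake decaying IR
wet = fft_convolve(dry, ir)
print(wet.shape)                               # (len(dry) + len(ir) - 1,)
```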
 

I think it should be great for software developers working on large projects. That's probably one of the largest professional markets, yet it got zero representation in this review and the Skylake-X benchmarks.

If their Ryzen reviews had merely tried a Linux kernel build, they might've scooped many other publications in finding a Ryzen bug affecting such workloads.

http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Compiler-Issues
http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Test-Stress-Run
http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Segv-Response
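
The test itself would be nothing fancy. Something along these lines would do (the kernel tree path is a placeholder; this assumes a Linux box with make and a toolchain installed):

```python
# Time a parallel kernel build as a many-core compile benchmark.
import os
import subprocess
import time

KERNEL_TREE = "/path/to/linux"                 # placeholder: a kernel source tree
jobs = os.cpu_count() or 1

subprocess.run(["make", "-C", KERNEL_TREE, "defconfig"], check=True)
subprocess.run(["make", "-C", KERNEL_TREE, "clean"], check=True)

start = time.perf_counter()
subprocess.run(["make", "-C", KERNEL_TREE, f"-j{jobs}"], check=True)
print(f"build with -j{jobs}: {time.perf_counter() - start:.1f} s")
```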


They've given lots of coverage to the Infinity Fabric and other technical details, over the course of the various Ryzen launches. Anandtech has even more of the technical nitty gritty.

Personally, I like reading about the details and seeing how different architectures handle different sorts of workloads. Games are definitely an important part of that, but I agree they're only a couple facets of the whole picture.

In some sense, it should come as no surprise if games favor more conventional architectures, since that's where the market is. Game designers spend a lot of time tuning their software, based on what large numbers of people actually use. And mainstream CPUs tend to be well-adapted to running this software because that's what so many high-end PCs actually run.

The workstation applications tend to be a bit more interesting, since their exotic workloads (but not so much the OpenGL stuff) tend to be less amenable to optimizing for desktops. That's why we still have workstations.
 
What I find to be the most annoying, frustrating, and disgusting practice is posting fake reviews, or comments aimed only at praising one company or one product over the other. The absurd bias that permeates these comments pollutes a comments section that could very well be a place for great and constructive criticism, debate, conversation, and explanation; instead, all we get is garbage that doesn't serve Tom's Hardware at all.

And it's not just the fanbois spewing this stuff. It's extremely obvious that there are people out there, working for either Intel or AMD, directly or indirectly, whose jobs are to purposefully make comments that benefit the company they work for or hurt the other.

Comments like:

"Thank you for this review. I was seriously considering Threadripper. Looks like the 7700k is still the sensible choice for the price when gaming."

"I just looked at gaming benchmark and stopped reading there because as i thought Intel CPUs are killing Thread Ripper in gaming."


...are just absurd.
 


Another option would be to simply keep the computer in another room or other sound-isolated location, with cables passing through a wall for your various peripherals. Otherwise, even if the cooling were silent, you might still encounter things like coil whine.
 
And to the folks over at Tom's Hardware: it's obvious that Threadripper offers more PCIe lanes than Intel. This, I would argue, is one of Threadripper's most important features, as it allows for a much wider array of possible configurations than Intel.

I would like to think that a company that has established itself as a thorough and fair outlet for technology journalism, news, reviews, etc. would not only understand this (which it seems you do), but would also review the Threadripper system in the best possible configuration it has to offer, in order to properly portray the performance the product brings to the consumer table.

With 64 PCIe lanes, a Threadripper system can run three GPUs at full PCIe x16, which is something competing Intel systems cannot do. You should compare a Threadripper system with three GPUs configured like that to the best setup you could possibly build on a competing Intel platform, and THEN benchmark the two and see what the results are.

Even if you took the upcoming 18-core Intel CPU and put it up against the 1950X, the two additional cores would offer a minuscule performance advantage compared to what three GPUs at x16 can do over the competition, which only has enough PCIe lanes to run two GPUs at x16. The whole point of offering the additional lanes is that more performance can be had by using them. The point isn't just to compare core-vs-core performance; hell, we all know that already from Ryzen. Sure, if you want to see what 16 cores can do, that's all well and good, but when you review a new CPU and benchmark its performance, you should really benchmark the peak performance it puts at your disposal.
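
To put rough numbers on that lane math (these are the commonly cited usable lane counts, so treat them as assumptions):

```python
# Lane budget sketch: the 1950X exposes 64 lanes (~60 usable after the
# chipset link); Intel's top Skylake-X parts offer 44. Counts assumed.
PCIE3_GBS_PER_LANE = 8 * (128 / 130) / 8       # 8 GT/s, 128b/130b coding ~0.985 GB/s

usable_lanes = {"Threadripper 1950X": 60, "Core i9-7900X": 44}
for cpu, lanes in usable_lanes.items():
    gpus_at_x16 = lanes // 16
    leftover = lanes - 16 * gpus_at_x16
    print(f"{cpu}: {gpus_at_x16} GPUs at x16 "
          f"(~{16 * PCIE3_GBS_PER_LANE:.1f} GB/s each), "
          f"{leftover} lanes left over for NVMe etc.")
```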

And people want to talk about how gaming isn't Threadripper's strength, but with the ability to run three GPUs at full PCIe x16, I beg to differ. I'd love to see those benchmarks against the best you can do with a competing Intel-based system.

That would be a true review of Threadripper, and something I would like to see Tom's Hardware do with this very Threadripper and with every product going forward. Sometimes apples-to-apples just doesn't paint the whole picture.
 

Yes, of course. Even then, you probably don't want it to be too loud, as sound-proofing is a relative thing and harder to do at some frequencies than others.

And if we're talking about what defines a DAW, then it's probably safe to say that quiet is a core feature. It's surely not practical for a lot of home studios to have a sufficiently sound-proof area that's also well-ventilated, apart from their main recording/mastering space.

Of course, we can speculate, as nerd did about FFTs and Convolutions, but the point is we're just guessing. As I said before, if you want a good assessment of how well-suited TR is to DAWs, it's best to consult someone who really knows DAWs.
 

You could be describing yourself, as far as I'm concerned.


How are those absurd? Did you actually look at the gaming benchmarks? Except for AotS, i7-7700K consistently sits at the top (although it's barely edged out by the i9-7900X in a couple).

As others have pointed out, TR was not made to be a gaming chip. If someone wants the best CPU for single-GPU gaming, there's no question i7-7700K is it. There are good reasons to buy TR, but that's not one of them.

Anyone who's followed me for any amount of time should be able to vouch for the fact that I'm neither a fanboy nor a shill. I'm all in favor of AMD succeeding, but we need to stay clear-eyed about the facts. What's the point of having benchmarks if people are just going to ignore them?
 

Benchmark what? There simply aren't many applications that can use 3 GPUs and would significantly benefit from full x16 bandwidth. Maybe deep learning and a few HPC workloads, but that's about it.

And if you really want fast multi-GPU communication, you can't do better than a DGX-1.
 
Great-looking results! :) I'm a content creator (3D artist), so rendering is important to me, and those results are fantastic!
I don't have a problem at all with a few FPS less in some games. However, I'm worried about those "2D & 3D Workstation Performance" results; I can't really understand their use.

I work mostly with Maya, and these results show a significant FPS drop in the viewport, in Maya as well as in every other 3D program, which worries me, since I use another 5-7 programs. I can't really see why those tests are done in the first place; as far as I know, 3D viewports (no matter if it's OpenGL or DirectX) are hugely impacted by the GPU, not the CPU. I did a little test just now in Maya: I opened an empty scene, and while constantly rotating the view my FPS was 130 and my CPU usage was around 20%. Then I created a sphere and subdivided it a few times until it was 12 million tris, and I still got 130 FPS with 20% CPU usage. All the resources used come primarily from the GPU.

So I was wondering what this test actually means, since I'm thinking of buying a Threadripper 1950X and I don't want to end up screwed with a bad viewport experience in all of my programs. I don't see how those tests are relevant to CPU testing, since I think most of the software there uses the GPU for real-time preview and rendering. The exception is ZBrush, which is heavily dependent on the CPU and I think will actually benefit significantly from so many cores, so that test could be much more relevant.

How is this SPECviewperf 12.1 test actually run? What does it mean? Could it be an optimization problem on Autodesk's side that will be fixed soon, or a problem with the SPECviewperf 12.1 test itself, maybe not being well optimized? If someone could elaborate on the topic, that would be great.

I don't want to buy such an expensive CPU for work and end up with a bad experience most of the time I'm working on something, since I spend about 90% of my time modeling, texturing, lighting, etc. and only about 10% rendering.
Cheers
 

The differences in the results justify the benchmark. If it were entirely GPU-limited, then all the results would be about the same, and it wouldn't be a very interesting or useful test.


I'm just guessing, but try some animation. Can you do anything that changes how the sphere is subdivided, over time? Maybe animate some complex deformations. Now, see if your FPS drops or your CPU load increases.

In the simple case, the CPU doesn't have much work to do. A couple of threads are probably busy-waiting on the GPU and just copying the same geometry over to it. However, once you add some complex animations, I expect you'll see the FPS take a hit. If that happens and the CPU stays at 20% (do you have a 10-core CPU?), then the program probably uses only a couple of CPU threads, and it's known that Ryzen's single-thread performance is a bit weaker than the competition's, especially for AVX-heavy workloads, which Maya likely uses. As for why TR is the worst of the Ryzens on the Maya 2013 viewset, perhaps that's because the data gets placed in a pool of memory attached to a different die than the one that's mostly accessing it.
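
If you want to sanity-check the "only a couple of threads" theory, a generic per-core monitor is enough. Here's a rough sketch (it assumes the psutil package and is nothing Maya-specific): run it in a terminal while rotating the viewport. If one or two cores sit near 100% while the rest idle, an overall "20% CPU" reading is hiding a single-thread bottleneck.

```python
# Per-core CPU load sampler (assumes: pip install psutil).
import psutil

for _ in range(15):                             # ~15 seconds of samples
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = sum(1 for pct in per_core if pct > 80)
    print(f"cores >80% busy: {busy:2d} | "
          + " ".join(f"{pct:3.0f}" for pct in per_core))
```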


That's a good question. You can download and run it yourself. Then you'll hopefully see what it's actually measuring and can decide whether it's relevant to your purposes:

https://www.spec.org/gwpg/gpc.static/vp12.1info.html


I'd suggest asking on their customer support forums (if they have any) whether they have any Ryzen/TR-specific optimizations planned and what the ETA is. If so, perhaps someone there will be able to point you to updated benchmarks once any such patch is released.
 
"Where you see Cinebench R15 single or multi-threaded in this review? I tested it only for OpenGL. For compute I used mostly real world applications. Please send me a link or quote to understand, what you mean."

My understanding is that Cinebench's OpenGL test is a video card test, separate from the single-core and multi-core CPU tests. Why run the OpenGL test at all in a CPU review?
 
So what now? I don't understand the review. Shall I buy AMD or Intel???? I have an i5-2500K @ 4.6GHz waiting to be replaced sometime soon.
 
peterf28 (and others): NO comments have been deleted in this thread other than one spam post and one deleted by a user (their own post). Moderators can view active and deleted posts, and there is nothing nefarious going on with the benchies, either.
 

Look at the results; the CPU is clearly a factor.

People who do 3D animation need good interactive performance, for which the CPU can be a bottleneck in complex animations and scenes. In some ways, I think it can be more demanding than gaming, since the models artists work on might contain far more geometry. By the time models reach gamers, they've been optimized so that even lower-end machines can handle the geometry.
 