AMD Ryzen Threadripper 1950X Review

Obviously there will be some particular merit to the 1950X: it has 6 more cores / 12 more threads than the Intel 7900X. I don't see how the 1950X would apply more to streamers than the 7900X, though. The 7900X can deliver better game performance, marginally or not, and its 20 threads are surely up to the task of streaming, not leaving room for 32 threads to make a difference. In the case of the i7-7700K vs. the 1700-1800X, I can see the argument, since 8 vs. 16 threads allows for better streaming. Going beyond 16 threads likely doesn't yield a difference. I think the 7820X is great for gamers/streamers, and if you want to save some money, snag an AMD 1700-1800X. Gamers/streamers need not buy a $1K processor. I look forward to the 1920X and 1900X reviews; I am curious how the 1900X compares to the very similar 1800X. It sucks that Ryzen is such a poor overclocker. The Intel chip gets criticized for its thermal paste, which I can agree with, but it still manages much better overclocking results in the majority of cases here.
 
This is not a gaming CPU. It's for heavy workloads. And now here is the question... WHO ON EARTH CAN SEE A 2 TO 5 FPS DIFFERENCE? Because that is the difference in most games. Who cares if it is 87 fps or 81 fps? You can't see the difference. Above 60 fps you can't see any difference.
 


Tell that to those who bought a 144Hz 1080p FreeSync or G-Sync monitor. But anyway, back on topic: check out Guru3D's new benchmark of Magix Vegas Pro (formerly Sony Vegas). Anyone who uses a Vegas product for video encoding and rendering will be very interested in the 1950X's results there. They don't have many Intel comparisons because they just started using it. Wow. This is encoding purely off the CPU:

http://www.guru3d.com/articles_pages/amd_ryzen_threadripper_1950x_review,17.html
 


It is relevant because Intel doesn't even need anything higher than 14/28 cores; that will beat the crap out of Threadripper in content creation. As far as gaming goes, we know it already sucks compared to even an 8/16 Intel CPU. It is also relevant because it shows that AMD didn't bring anything new. Even an 18/36 Haswell-E Xeon will beat the crap out of Threadripper.
 
If we really want to compare this against other CPUs for games, how about running two games at the same time, each on its own GTX 1080?
 


14/28 is enough for Intel to beat Threadripper in content creation; in fact, to make a joke out of it. The extra $$$$ justifies it. It is coming out soon anyway... stay tuned.

 
If you absolutely must game at 120 fps, then you should buy a 7700K and overclock it. Otherwise, TR is perfectly capable of gaming at over 60 fps, which is by no means slow. Content creation time is a much more valuable metric for those who need it than gaining a few frames on top of an already high frame rate.
 
@Freak777power

Did you read the article? Threadripper is unlocked, and they overclocked it for this very article. Sucks in gaming? Celerons suck in gaming. A4 APUs suck in gaming. Threadripper is competitive with all these top dog chips... that doesn't exactly translate to "sucks at gaming".

Look kid, you're obviously an AMD hater, you obviously don't have a background that allows you to see the merit in a chip like this, and you obviously have about 1/3 of a clue as to what you are talking about. So I'm not going to candy-coat it: you don't really know what you are talking about. Go back to i7 land and pay attention to the chips meant to do what you want to do. While you are at it, you might want to get to learning more about those.

What you need to take away from this is that core speed is god in a lot of these tests, and the 18-core isn't going to have a lot of clock speed behind it. The boost clocks will help the 18-core on some benchmarks, but Threadripper will be competitive, if not faster, in all the heavily threaded workloads. "But why is this?" you most assuredly aren't asking, because you are probably already deep into your ignorance-filled response, but I'll say it anyway. The 18-core will be running a MUCH lower clock speed across all of its cores. When the clock speeds are the same, the many more cores of Threadripper will curb-stomp the 7900X in heavily threaded tasks. The 18-core Intel will have 2c/4t more, but will be nearly 1 GHz behind the core clock of the 1950X. Even with the extra cores and improved IPC, it may not be able to overcome that deficit. Am I ruling Intel out and saying "yay, AMD wins!"? No, I'm not. Not in the slightest. There are plenty of applications where Intel will still be supreme. What I am saying is that it isn't as black and white or cut and dried as you make it seem. Even in gaming, Threadripper has its moments.
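
To put rough numbers on that (the clock figures below are my own illustrative assumptions, not confirmed specs), the back-of-envelope math looks like this:

# Back-of-envelope throughput proxy for a perfectly parallel workload:
# total core-GHz = cores x all-core clock. Deliberately ignores IPC,
# memory bandwidth, and boost behavior; clock values are assumptions
# for illustration only.
tr_1950x = 16 * 3.4   # assumed ~3.4 GHz all-core on the 16-core 1950X
intel_18c = 18 * 2.6  # assumed ~2.6 GHz all-core on the 18-core part

print(f"1950X:   {tr_1950x:.1f} core-GHz")   # 54.4
print(f"18-core: {intel_18c:.1f} core-GHz")  # 46.8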

Now, stop being a fanboy noob and try to wrap your head around the reality of the hardware.
 
You should stop using the Very Fast preset for Handbrake. High-complexity instructions only really get stressed at the Slower and Very Slow settings. Very Fast quality is awful and no one uses it professionally. Also, I really hope you guys remembered to turn off the decomb filter.
 
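For anyone curious, here's a minimal sketch of what a quality-focused run looks like when scripted (file names are placeholders; --encoder-preset picks the x264 preset, and the decomb filter stays off simply because it isn't requested):

import subprocess

# Hypothetical sketch: drive a pure-CPU HandBrake encode with a slow preset.
# Input/output file names are placeholders.
subprocess.run([
    "HandBrakeCLI",
    "-i", "input.mkv",               # placeholder source file
    "-o", "output.mp4",              # placeholder destination file
    "-e", "x264",                    # software x264 encoder (pure CPU)
    "-q", "20",                      # constant-quality RF 20
    "--encoder-preset", "veryslow",  # the slow preset that actually stresses the CPU
    # no decomb flag is passed, so the decomb filter stays off
], check=True)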
 
"Mainstream Ryzen models feature a dual-core Precision Boost during lightly-threaded tasks"

Everybody always repeats this phrase, but I don't think, particularly with Ryzen, that it's completely accurate. I'm running a 1600X, and you can clearly see that Precision Boost can happen even at full load: I'll see 4 cores at the 3.7 GHz XFR speed and 2 at the Precision Boost + XFR speed of 4.1 GHz. Now, in such a case, those two cores don't spend 100% of the time at 4.1 GHz, but I don't think you can rightly call 6 loaded cores a lightly threaded workload.
 
BTW, the same thing applies to Intel: I have a Skull Canyon, and if I fire up HWiNFO64 and, say, Handbrake, I'll see one core at 3.5 GHz, a second at 3.4, and the last two at 3.3. Again, that's kind of the opposite of "lightly threaded".
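
(If anyone wants to check the same thing on Linux instead of HWiNFO64, here's a quick sketch that reads per-core clocks from /proc/cpuinfo; run it while an encode is going.)

import time

# Print per-core clock speeds once a second by parsing the "cpu MHz"
# lines in /proc/cpuinfo. Rough Linux-side equivalent of watching the
# core clocks in HWiNFO64.
def per_core_mhz():
    clocks = []
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cpu MHz"):
                clocks.append(float(line.split(":")[1]))
    return clocks

while True:
    print("  ".join(f"c{i}:{mhz:4.0f}" for i, mhz in enumerate(per_core_mhz())))
    time.sleep(1)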
 


Pretty much this. AMD DID NOT have to scale base clocks back with the number of cores, unlike INTEL.
 
Guru3D has a REALLY good review of TR. This processor is bringing Intel's $5000+ performance into the consumer realm; that's what it is. It does play games as well, but that is not its primary use. Xeon performance under $1000...
 
What a ridiculously huge beast! I love it! The 1600 I use now is an absolute multi-tasking monster already. Some server nerds and studio design people I know would love to make good use of such power. I'm calling them now so they can see the benchmark results.
 


Doesn't matter. Handbrake doesn't utilize more than 10 cores, and it spawns few additional threads beyond that. It's not a good benchmark for 12+ core CPUs. See my link above to Guru3D's Vegas testing with this CPU.
 


No. You clearly don't know what you are talking about. Let's take the 14/28 i9, which is more than enough to beat the crap out of Threadripper. All of its cores will run at 3.1 GHz without even Turbo Boost 2.0 or Turbo Boost Max 3.0; in other words, you haven't the slightest idea what you are talking about. You clearly never owned a Broadwell-E Xeon, or you'd know how the cores run and at what speed.

The 18/36 won't be 1 GHz behind, it will beat Threadripper like there is no tomorrow, and it will be faster in gaming. As I said, you haven't the slightest idea. Yes, Threadripper is unlocked, but it just so happens you'll barely pass the 4.0 GHz mark. The 18/36 will turbo to 4.4 GHz, if I am not mistaken, and again, all its cores will run between 3.1 and 4.0 GHz, just like Threadripper.

I repeat: the 14/28 will make a joke out of Threadripper, at a higher price, but hey... with a premium product comes a premium price.

The 18/36 Intel Xeon I have is already faster than Threadripper in all the benchmarks Tom's Hardware posted... enough said?

Again, how do I know? I have one.

 
well well... as a *former* Intel hedt guy (through three generations), my opinion is Intel screwed the pooch.
it was bad enough that a sub $300 proc ran toe to toe with my $1k Intel.
X299 is a bad joke upon the marketplace.
the *continued* intransigence with their choice of IHS, with the resulting horrible thermals and power reqs. the three tiers of pcie connectivity and the subsequent confusion in the marketplace. the kabylake-x cpus...why in hell do they exist????
anyhow, as soon as the eight core/16 thread AMD threadripper comes out, i'm on it. i mean 64 pcie lanes under $600...doesn't get any better than that.
 
I'm still trying to understand how a CPU pushing 80+ fps in games "sucks". In all the Ryzen YouTube videos, all the FB posts I've read or commented on, from all the owners I know who use them, the Ryzen systems game just fine, with very good framerates and little to no stuttering. With the microcode releases in BIOS updates, the systems got better and better, the memory issues dissipated, and the overall experience became rather standard. Even looking at these benches, the TR systems still gamed fine, with very acceptable framerates. What constitutes "sucking" now?
 


Um, people do game at 1080p with a GTX 1080/1080 Ti when (but not exclusively) the monitor is 144Hz+, if they play modded titles, if they run the monitor in DSR modes above 1080p, because of budget constraints on higher-res monitors, OR because they choose to for ramped-up settings on all titles, etc... Many individual reasons.

For benchmarking and publishing... that's a matter of debate, but evidently there's some reasoning behind still including 1080p benches: http://store.steampowered.com/hwsurvey/
 


You get more than 64 PCIe lanes in total with the Intel X299 platform: 44 from the CPU, plus additional lanes from the chipset itself.
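
(For the lane math, assuming the X299 chipset tops out at 24 PCIe 3.0 lanes, which is my understanding, and remembering that chipset lanes share a single DMI 3.0 x4 uplink to the CPU:)

# Rough lane-count comparison. The 24-lane chipset figure is my assumption
# of the X299 PCH maximum; chipset lanes funnel through a DMI 3.0 x4 link,
# so they are not equivalent to CPU-direct lanes.
x299_cpu_lanes = 44
x299_pch_lanes = 24
x399_cpu_lanes = 64

print(f"X299 total:      {x299_cpu_lanes + x299_pch_lanes}")  # 68
print(f"X399 CPU-direct: {x399_cpu_lanes}")                   # 64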

 


It does not suck, but when you see a bunch of AMD fanboy losers saying how AMD is beating Intel, then as a counter-argument I say... well, you know what... AMD's FPS does suck.

 


It shows that there is really nothing out there that utilizes more than 10 cores; even that is overkill.
 


I tested X299 with the 6/8/10-core parts and I see no problem with power requirements and temperatures. I think it is overblown. You get more PCIe lanes from an Intel mobo, 44 from the CPU plus additional lanes from the chipset, and the motherboards also provide more features, for $100 less than X399, which makes sense since X399 is more complex to make.

 