News: AMD Announces 16-Core, 32-Thread Ryzen 9 3950X for $749 With 4.7 GHz Boost, Launches in September

While I don't play every modern title, I have yet to find a game that doesn't run well at 1080p maxed out on my 4670K, including Rage 2.

As I said, some titles will benefit, as will multitasking, but straight gaming is still heavily IPC- and frequency-dependent, and will be for a few more years.

While it's true that a 4-core is probably the suggested minimum, and decent 4-cores will still do well (subjectively, depending on what your definition of playable and enjoyable is), a higher-core-count CPU currently is objectively still going to give better performance than said 4-core. It's especially relevant in the minimums rather than straight averages. Yes, it's true that above 4 cores you start to hit diminishing returns, but we've hit the point in games where, for a lot of people, it's already worth it.

One could probably still make a dual core with HT work, call it better FPS/$, and still have a decent frame rate, but we're well beyond the point of that being the generally agreed-upon sweet spot. Starting with this generation (a big generalization, I know) of software and hardware, unless you actually want to compromise your performance now and for the coming future, a 6-core or above makes more sense.

That said, a 4-core is still a better value for gaming.
 

TJ Hooker

Actually we're currently at 6 cores being the minimum for gaming, as some newer titles bottleneck the 4-core i5 7600K, which gets beaten by the Ryzen 1600 despite its lower clock speed and IPC. So having 8 cores to give a little more headroom for multitasking or streaming is a good thing. In the future we may see 8-core utilization.
Sure, 4C4T CPUs may be starting to fall behind, but 4C8T CPUs still do fine. A 7700K typically matches or beats a 2600X in gaming.
 
While the 16-core, 32-thread processor is very impressive, I have to admit it is more processor than I need. I was really thinking the 8-core, 16-thread R7 3700X or 3800X would be the way to go to get the best gaming performance and still have enough muscle for content creation, but now I'm sold on the R9 3900X. I thought that latency issues would make the higher-core-count 3900X take a performance penalty in gaming, but that doesn't seem to be the case. Better overall gaming performance than an i9 9900K while outperforming an i9 9920X in productivity is just beyond impressive. As long as independent reviews show the same performance, the R9 3900X will be my next upgrade for sure.
 

Giroro

You always pay a premium when getting above the "mainstream" core count. Look at the price difference between Intel's 10-, 12-, 14-, and 16-core parts: they start at $200 premiums, then jump to $300.
I understand what you're saying, but as I pointed out, the 12-core Ryzen 3900X, which will hold the performance and core-count crown for 2-3 months after launch, has nearly the exact same price per core as the 8-core 3700X and the 6-core 3600X.
$749 is still an increase in price per core, plus the 3950X will have a nearly identical cost to manufacture/ship/etc. as the 3900X.
 
I understand what you're saying, but as I pointed out, the 12-core Ryzen 3900X, which will hold the performance and core-count crown for 2-3 months after launch, has nearly the exact same price per core as the 8-core 3700X and the 6-core 3600X.
$749 is still an increase in price per core, plus the 3950X will have a nearly identical cost to manufacture/ship/etc. as the 3900X.

Let's make this incredibly simple:

AMD
R9 3900X: 12 cores / 24 threads, $500
R9 3950X: 16 cores / 32 threads, $750

Intel
i9 9920X: 12 cores / 24 threads, $1,200
i9 9960X: 16 cores / 32 threads, $1,700
https://www.newegg.com/core-i9-x-series-intel-core-i9-9920x/p/N82E16819117963

The R9 3900X outperforms the i9 9920X, the R9 3950X will probably outperform the i9 9960X, and both AMD processors are well under half the cost of their Intel counterparts. No matter how you look at it, these AMD processors are an incredible value; let's not split hairs here.
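
To put actual numbers on the price-per-core argument, here's a quick sketch using the rounded launch prices from the list above (MSRPs, not street prices):

```python
# Price per core, using the rounded launch prices listed above.
chips = {
    "R9 3900X": (500, 12),
    "R9 3950X": (750, 16),
    "i9 9920X": (1200, 12),
    "i9 9960X": (1700, 16),
}

for name, (price_usd, cores) in chips.items():
    print(f"{name}: ${price_usd / cores:.2f} per core")
```

That works out to roughly $42 vs. $47 per core on the AMD side, so the 3950X does carry a small per-core premium, but it's still well under half of Intel's roughly $100+ per core.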
 

Deleted member 14196

I don't care anything about games. The 16-core would allow us to run a great many virtual machines on one workstation, thus saving us lots of money, so this is a huge win. Don't care about games in the least.
 
I'm not OK with the 3600's clock speeds when compared to AMD's own existing 1600 and 2600 parts; you're not getting much other than IPC unless this thing overclocks well. Personally, I'll go to AMD's new mid-range during the holiday season and get a 3800 or 3800X. I don't need the 3900X or 3950X at this point.

I am curious about the 3000-3500X lineup and what that will bring to the truly budget-constrained.
Well, it's about time for me to update my system (7 years), and I'm debating between the 3600 and the 3600X. I'm not sure the difference is worth the extra green stamps.
 
It's most definitely before. Disable their pitiful HT and their processors are pretty crappy.

I have a friend who works at Apple. I guess they have been telling all their customers that the only way to make their computer truly safe from the hardware flaws in all Intel processors since Sandy Bridge (basically the entire Core microarchitecture) is to disable Hyper-Threading. If they don't, or rely on the mitigations, they do so at their own risk. I guess there are a "few" unhappy customers....
 
I don't care anything about games. The 16-core would allow us to run a great many virtual machines on one workstation, thus saving us lots of money, so this is a huge win. Don't care about games in the least.

Minus the dual-channel memory that will bottleneck VMs. Seriously, an older TR will probably outperform it in VM work due to more memory channels, more memory bandwidth, and more total RAM.

It's most definitely before. Disable their pitiful HT and their processors are pretty crappy.

I would never assume.

Also, the same HT that AMD adopted? I am sure AMD made some changes, but at its base their SMT is the same as Intel's.

I have a friend who works at Apple. I guess they have been telling all their customers that the only way to make their computer truly safe from the hardware flaws in all Intel processors since Sandy Bridge (basically the entire Core microarchitecture) is to disable Hyper-Threading. If they don't, or rely on the mitigations, they do so at their own risk. I guess there are a "few" unhappy customers....

Actually, the entire Core uArch would go back to the original Core Solo and Core Duo (just before Core 2 Duo). However, HT was brought back first with Nehalem in HEDT and then with SB in the mainstream.
 
Minus the dual-channel memory that will bottleneck VMs. Seriously, an older TR will probably outperform it in VM work due to more memory channels, more memory bandwidth, and more total RAM.



I would never assume.

Also, the same HT that AMD adopted? I am sure AMD made some changes, but at its base their SMT is the same as Intel's.



Actually, the entire Core uArch would go back to the original Core Solo and Core Duo (just before Core 2 Duo). However, HT was brought back first with Nehalem in HEDT and then with SB in the mainstream.

I honestly had forgotten about Core Solo and Core Duo.... When I think of Core, I always think of the first-generation Sandy Bridge, as it was so groundbreaking in its day.
 
Actually, the entire Core uArch would go back to the original Core Solo and Core Duo (just before Core 2 Duo). However, HT was brought back first with Nehalem in HEDT and then with SB in the mainstream.
To be pedantic, the i7 Lynnfield processors on Intel's mainstream LGA 1156 socket had HT (e.g. the i7 860). But for sure, Intel has relied on HT for many years now.

I would never assume.

Also the same HT that AMD adopted? I am sure AMD made some changes but at its base their SMT is the same as Intels.
I've been wondering about Zen's exposure to SMT vulnerabilities for a while now. Given Intel's current market dominance in datacenters, it's not surprising that the majority of attention has been focused their way. AMD, of course, claimed to be more secure, but we can't just take their word for it. My question was: if those same researchers spent equal effort trying to exploit AMD's SMT implementation, would they be successful?

There's a short section on "Hardened Security" in AnandTech's Zen 2 microarchitecture analysis article: https://www.anandtech.com/show/14525/amd-zen-2-microarchitecture-analysis-ryzen-3000-and-epyc-rome/3
Ian Cutress suggests that Zen processors have always run additional security checks, which greatly limited their exposure. Zen 2 apparently includes hardware mitigations for the few exploits they were vulnerable to, which, according to AMD, do not impact performance.
It does seem that AMD's claims to have designed a much more security-conscious SMT implementation have merit. Time will tell whether there are still significant security flaws, but for now Zen, and more so Zen 2, seem very secure relative to the competition.
 
I also wonder, will the 3400G beat the 9400?
Probably not. It might clock a little higher, but IPC will also be a little lower, being only Zen+. It will probably be pretty close, but will likely fall slightly behind in things like games. The real advantage for the 3400G will be its superior integrated graphics, for those making use of them. If you plan on using a dedicated card though, it doesn't make much sense, since you can already get a Ryzen 2600 for about the same price, with 50% more cores and threads, which should be overclockable to a similar level. Or the 2600X for just a little more, to get those higher clocks out of the box. Or for just a little more still, a 3600 with improved IPC that might perform more like an i7.

Another thing to consider is that until B550 motherboards are released, it might be a bit tricky to find boards guaranteed to ship with a BIOS that supports the new processors out of the box, unless you go with X570. Those boards will undoubtedly be priced higher than B450 and probably many X470 options, so unless you have a first- or second-gen Ryzen processor on hand, a 3400G build with a dedicated graphics card might end up costing more than a 2600X build, at least until the mid-range boards come out.

Actually we're currently at 6 cores being the minimum for gaming, as some newer titles bottleneck the 4-core i5 7600K, which gets beaten by the Ryzen 1600 despite its lower clock speed and IPC. So having 8 cores to give a little more headroom for multitasking or streaming is a good thing. In the future we may see 8-core utilization.
The SMT will likely help as well. Six cores without SMT is arguably a good current minimum for ideal performance in some of the more demanding games, and the current i5s generally perform well in these titles. The Hyper-Threaded quad-core i7s also hold up pretty well, though. So, a 6-core processor with SMT should similarly provide some headroom for future, more heavily threaded games. I don't suspect many game developers will be utilizing more threads than a 6-core, 12-thread processor can easily handle for a number of years, so that could arguably be considered the current sweet spot, at least if one isn't streaming or otherwise heavily multitasking while gaming.

I am personally waiting for NVDIMMs to become more common in the mainstream, as I feel storage is the largest bottleneck on systems today.
Eh, I'm not sure I'd say that. Or at least, the vast majority of current applications can only benefit so much from additional storage performance. Just look at game or application load times, and how little they are affected by moving from a SATA SSD to an NVMe SSD that is theoretically multiple times as fast. You might get around 10% faster performance, or in many cases even less than that. Applications are still widely designed with hard drives in mind, and most will probably be for many years to come. Any frequently accessed data is typically held in memory, so application performance tends to not be affected all that much by storage performance, aside from situations where the files in question are too large to remain in RAM.
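
As a crude way to see that RAM-caching effect for yourself, here's a sketch (the file name and size are arbitrary; note the cold-read caveat in the comments):

```python
# Crude page-cache demo: once a file's data is in RAM, re-reading it barely
# touches the drive. Caveat: since we just wrote the file, even the first
# pass is probably served from cache; on Linux, run
#   sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
# beforehand to force a genuinely cold first read.
import os
import time

PATH = "scratch.bin"  # arbitrary scratch file, ~256 MB

with open(PATH, "wb") as f:
    f.write(os.urandom(256 * 1024 * 1024))

for label in ("first pass", "second pass"):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(8 * 1024 * 1024):
            pass
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed:.3f}s ({256 / elapsed:.0f} MB/s)")

os.remove(PATH)
```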

Also, the video didn't show min FPS. It was average FPS, with the 1% frame time as the dark blue bar.
1% lows are effectively a better way of looking at minimums. They are the result of averaging the lowest 1% of frames, so you don't have a single frame throwing off the numbers from one run to the next, nor do you have frequent stutters getting hidden by the rest of the frames around them, which can provide a better representation of how smooth a game runs.
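
For anyone curious, here's roughly how a 1% low falls out of a frame-time log (the numbers below are made up just to show the method; exact methodology varies between reviewers, e.g. some report the 99th-percentile frame time instead):

```python
# Deriving average FPS and "1% low" FPS from per-frame render times (ms).
# The frame times below are hypothetical: mostly smooth, with a few stutters.
frame_times_ms = [16.7] * 950 + [33.3] * 40 + [66.7] * 10

slowest = sorted(frame_times_ms, reverse=True)
worst_1pct = slowest[: max(1, len(slowest) // 100)]  # the slowest 1% of frames

avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
low_1pct_fps = 1000 / (sum(worst_1pct) / len(worst_1pct))

print(f"Average FPS: {avg_fps:.1f}")      # ~56: the stutters barely dent the average
print(f"1% low FPS:  {low_1pct_fps:.1f}") # ~15: the stutters dominate here
```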
 
The biggest issue with more cores is software adoption ... While I would say it would be a good workstation CPU, I feel that the dual-channel memory will be the biggest bottleneck for it, as workstation applications not only like multiple cores, they also like a lot of fast memory. Dual channel will not allow this CPU to really stretch its legs.

Some workloads are embarrassingly parallel, and there is software out there which takes advantage of that. Not all highly-parallel workloads are memory-bound, so this is a bit of a straw-man argument, and seems to conflate "would benefit from" with "is sufficiently limited by to see no benefit". AMD's benchmarks with Blender comparing an Intel Core i9 with the Ryzen R9 seem to demonstrate that there is a benefit to having these chips.
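
(For what it's worth, "embarrassingly parallel" just means each work item is independent, so throughput scales with cores until memory or I/O gets in the way. A minimal sketch, where the per-item work is a made-up stand-in for a render tile or a photo filter:)

```python
# Minimal sketch of an "embarrassingly parallel" workload: every item is
# independent, so a process pool can spread them across all cores.
from multiprocessing import Pool

def process_item(seed: int) -> float:
    # Made-up stand-in for per-item work (a render tile, a photo filter, ...).
    x = float(seed)
    for _ in range(100_000):
        x = (x * x + 1.0) % 1_000_003
    return x

if __name__ == "__main__":
    with Pool() as pool:  # defaults to one worker per logical core
        results = pool.map(process_item, range(64))
    print(f"Processed {len(results)} independent items.")
```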

Personally, I do a lot of photo editing, which could well benefit from this. The software I use is correctly-parallelised, and will happily use all 8 threads my i7-4790 exposes to it (i.e. 70%+ load on each "core"). From watching YouTube videos, the 8 core Ryzen R7s perform noticeably better than my i7 for this task. It's unlikely that the increased core count is going to be overly-hampered by any memory limitations, as far as the user-experience is concerned (a back-of-the-napkin estimate of dual-channel DDR4 bandwidth suggests it's more than enough for this usage*).

* To be honest, I was surprised at how little performance penalty dropping from dual-channel to single-channel memory had on general performance.
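
And for the curious, the back-of-the-napkin bandwidth figure goes something like this (assuming dual-channel DDR4-3200; adjust the transfer rate for your own kit):

```python
# Theoretical peak bandwidth for dual-channel DDR4-3200 (assumed configuration).
channels = 2
channel_width_bytes = 8        # each DDR4 channel is 64 bits wide
transfers_per_second = 3200e6  # DDR4-3200 = 3200 mega-transfers/s

peak_gb_per_s = channels * channel_width_bytes * transfers_per_second / 1e9
print(f"Theoretical peak: {peak_gb_per_s:.1f} GB/s")  # ~51.2 GB/s
```

Even a large raw photo is only a few hundred megabytes, so streaming it through a filter many times per second still sits comfortably inside that envelope (real-world sustained bandwidth will of course land below the theoretical peak).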
 
Wait, the Core Solo existed? I knew the Core Duo and Core 2 Duo existed, but not the Core Solo.

Yes. Core Solo was the same as Core Duo, just a single-core version. Yonah was the codename. They were mobile-only CPUs. However, Asus made a bracket to adapt them to desktop if people wanted to.

What's funny is Intel basically used the mobile market to test the Core uArch. They were low-power CPUs at lower clock rates, but they performed very well and used very little power. Intel eventually moved Core to desktops with Core 2. This is the same tactic they are using with their 10nm CPUs: starting off in mobile with low-power chips, then ramping up.

To be pedantic, the i7 Lynnfield processors on Intel's mainstream LGA 1156 socket had HT (e.g. the i7 860). But for sure, Intel has relied on HT for many years now.


I've been wondering about Zen's exposure to SMT vulnerabilities for a while now. Given Intel's current market dominance in datacenters, it's not surprising that the majority of attention has been focused their way. AMD, of course, claimed to be more secure, but we can't just take their word for it. My question was: if those same researchers spent equal effort trying to exploit AMD's SMT implementation, would they be successful?

There's a short section on "Hardened Security" in AnandTech's Zen 2 microarchitecture analysis article: https://www.anandtech.com/show/14525/amd-zen-2-microarchitecture-analysis-ryzen-3000-and-epyc-rome/3
Ian Cutress suggests that Zen processors have always run additional security checks, which greatly limited their exposure. Zen 2 apparently includes hardware mitigations for the few exploits they were vulnerable to, which, according to AMD, do not impact performance.
It does seem that AMD's claims to have designed a much more security-conscious SMT implementation have merit. Time will tell whether there are still significant security flaws, but for now Zen, and more so Zen 2, seem very secure relative to the competition.

I had forgotten about the LGA 1156 CPUs, but only because they were short-lived. Most people who had a Core 2 Quad waited for Sandy Bridge and skipped Lynnfield. I did.

As for the security, most hardware fixes tend not to have the same performance losses that software fixes do. I don't doubt that AMD also has vulnerabilities yet to be found, and I believe that if they get more market share they will be focused on more, and we will find them. I don't fault them for that; it just happens.

Let's think back to K10, when AMD was in a much better server and datacenter position, before Intel was really competitive there. That's when they found the TLB bug, which was a pretty major erratum. They had to patch it with microcode for the first-generation parts, and it was a big performance hit. But they fixed it in Phenom II.

Some workloads are embarrassingly parallel, and there is software out there which takes advantage of that. Not all highly-parallel workloads are memory-bound, so this is a bit of a straw-man argument, and seems to conflate "would benefit from" with "is sufficiently limited by to see no benefit". AMD's benchmarks with Blender comparing an Intel Core i9 with the Ryzen R9 seem to demonstrate that there is a benefit to having these chips.

Personally, I do a lot of photo editing, which could well benefit from this. The software I use is correctly-parallelised, and will happily use all 8 threads my i7-4790 exposes to it (i.e. 70%+ load on each "core"). From watching YouTube videos, the 8 core Ryzen R7s perform noticeably better than my i7 for this task. It's unlikely that the increased core count is going to be overly-hampered by any memory limitations, as far as the user-experience is concerned (a back-of-the-napkin estimate of dual-channel DDR4 bandwidth suggests it's more than enough for this usage*).

* To be honest, I was surprised at how little performance penalty dropping from dual-channel to single-channel memory had on general performance.

It will depend on the program, of course. Media-intensive ones will benefit from more and faster memory. VMs will very much benefit from more and faster memory. Some are more core-bound, some are limited by storage constraints; that's why a lot of workstations have SSDs as cache drives for AutoCAD and such.

I just feel that overall it's going to be more of an AMD fan CPU than a gamer or workstation CPU.

And I would hope an 8-core Ryzen would beat a 4-core i7 in multitasking.
 
https://www.tomshardware.com/news/amd-ryzen-3950x-vs-intel-i9-9980xe-geekbench,39640.html

And we have a winner!!!

The R9 3950X in early leaked benchmarks matches the i9 9900K in single-core while beating even the i9 9980XE (18 cores, $2,000) in multi-core benchmarks. I don't see how anyone could call that anything but a total, resounding win for AMD.

No, we have an unverified leaked benchmark with no information on the system setup at all. It is akin to a rumor. Take it with a grain of salt.

It's also a synthetic benchmark result. I doubt that real-world performance would have a 16-core beating an 18-core in a highly threaded, optimized program, much like Intel's 8700K wouldn't beat AMD's 2700X in the same scenario.

The true performance will be revealed when third parties get the processors to test and we have more than just a screenshot from an unverified source.