AMD Threadripper 3990X Review: Battle of the Flagships

People would not want a centralized, all-controlling computer (think HAL 9000) in their home. That is as true today as the day he said it.
I dunno, man. As TDPs climb ever higher, we're definitely getting closer.

I mean, who in the 70's would've ever predicted that people 50 years hence would be using water-cooled computers in their homes?

Some people even have racks with a few pieces of rack-mount networking gear, disk arrays, and server chassis. That's getting pretty close.
 
Who's "we"? For sure not you lol ... I need the most cores one can get for the lowest price. I use Handbreak 24/7 on a file-converting machine non-stop ... and I get paid for converting movie files into HEVC
If you get paid for that, you should look into using a GPU for these conversions. My old RX480 can handle 3 FHD streams at 90 fps when using ffmpeg - much more efficient.
As for Handbrake (yeah, that's actually how you spell it), it's not making proper use of more than 8 CPU cores in HEVC - my 2700X sees only 60% CPU use with it, so you're better off with an i7 9700.
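
For anyone who wants to try the GPU route, here's a minimal sketch of batch-driving ffmpeg's hardware HEVC encoders from Python. The encoder name, folder names, and settings are assumptions on my part (hevc_amf is the AMD encoder in Windows builds of ffmpeg; hevc_nvenc is the NVIDIA equivalent; VA-API on Linux needs extra device/upload options), so treat it as a starting point rather than a recipe:

```python
# Rough sketch: batch-convert a folder to HEVC using a GPU encoder via ffmpeg.
# Assumes ffmpeg is on PATH and was built with AMF support (AMD on Windows);
# folder names are placeholders.
import subprocess
from pathlib import Path

SRC_DIR = Path("input")    # hypothetical source folder
DST_DIR = Path("output")   # hypothetical destination folder
ENCODER = "hevc_amf"       # hardware HEVC encoder; swap for your GPU/OS

def convert(src: Path, dst: Path) -> None:
    cmd = [
        "ffmpeg", "-y",
        "-i", str(src),
        "-c:v", ENCODER,   # encode video on the GPU
        "-c:a", "copy",    # pass audio through untouched
        str(dst),
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    DST_DIR.mkdir(exist_ok=True)
    for src in SRC_DIR.glob("*.mkv"):
        convert(src, DST_DIR / (src.stem + "_hevc.mkv"))
```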
 
I remember back when dual-core processors first came out, all the reviews of them covered multitasking. That's what they're for, after all.

Lately, with core counts higher than ever, it seems like every single CPU reviewer has completely forgotten about the concept of multitasking. Have a piece of software that can't take full advantage of all the cores by itself? Not a problem. Run more software, maybe even another copy of the same software running another workload.

How can anyone review a 64-core processor without once even touching on multitasking? Premiere barely touching the CPU while rendering a video? Well, how does the CPU do if you render the video while playing a game? Compare the results of the game and the render run separately against the results when they run together. This kind of test used to be routine.

You'd think the difficulty of any one program even seeing all 128 threads in Windows would be a flashing neon sign indicating that multitasking tests are required, but apparently not.
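
For what it's worth, a multitasking test like that doesn't even need fancy tooling. Here's a rough sketch of how the separate-vs-together comparison could be scripted; the two workload command lines are placeholders, not anything a real review uses:

```python
# Back-of-envelope multitasking test: time two workloads separately, then run
# them at the same time and compare. Both command lines are placeholders; a real
# review would pair something like a game benchmark with a render job.
import subprocess
import time

WORKLOAD_A = ["HandBrakeCLI", "-i", "clip.mkv", "-o", "out.mkv"]  # hypothetical render job
WORKLOAD_B = ["ffmpeg", "-y", "-i", "clip2.mkv", "out2.mkv"]      # hypothetical second job

def timed_run(cmds):
    """Launch all commands at once and return total wall-clock seconds."""
    start = time.perf_counter()
    procs = [subprocess.Popen(c) for c in cmds]
    for p in procs:
        p.wait()
    return time.perf_counter() - start

solo_a = timed_run([WORKLOAD_A])
solo_b = timed_run([WORKLOAD_B])
together = timed_run([WORKLOAD_A, WORKLOAD_B])

print(f"A alone: {solo_a:.0f}s, B alone: {solo_b:.0f}s, together: {together:.0f}s")
print(f"Overhead vs. perfect overlap: {together / max(solo_a, solo_b):.2f}x")
```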
 
Lately, with core counts higher than ever, it seems like every single CPU reviewer has completely forgotten about the concept of multitasking. Have a piece of software that can't take full advantage of all the cores by itself? Not a problem. Run more software, maybe even another copy of the same software running another workload.
It's a good point, but benchmarks like this would probably have a lot of variability and would be unlikely to pair up exactly the same set of tasks any given user would need to run.

I'd suggest that it might be wise if benchmarks would simply add a graph that shows overall % CPU utilization. That way, a user can look at the set of tasks they'd expect to pair up, and see whether the CPU should have enough headroom to manage.

It's not perfect, since you can't quantify memory bandwidth utilization like that (to my knowledge), but it should still give some idea of how well one could multi-task with a given CPU.
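
Something like that utilization graph could be collected with a few lines of Python and the psutil package. This is just a sketch with a placeholder benchmark command, not a claim about how any review site actually measures it:

```python
# Sketch of the suggested "% CPU utilization" graph: sample overall CPU use while
# a benchmark runs, then report the average and peak. Needs the third-party psutil
# package; the benchmark command line is a placeholder.
import subprocess
import psutil

BENCHMARK = ["ffmpeg", "-y", "-i", "clip.mkv", "out.mkv"]  # hypothetical workload

samples = []
proc = subprocess.Popen(BENCHMARK)
while proc.poll() is None:
    # interval=1 blocks for one second and returns system-wide CPU % for that second
    samples.append(psutil.cpu_percent(interval=1))

if samples:
    print(f"Average CPU utilization: {sum(samples) / len(samples):.1f}%")
    print(f"Peak CPU utilization:    {max(samples):.1f}%")
```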


BTW, Phoronix found loads of Linux-based benchmarks that this CPU utterly dominates (granted, the majority of which are still encoding or rendering).

So, it's not as if there aren't plenty of single workloads that would justify it.
 
I dunno, man. As TDPs climb ever higher, we're definitely getting closer.

I mean, who in the 70's would've ever predicted that people 50 years hence would be using water-cooled computers in their homes?

Some people even have racks with a few pieces of rack-mount networking gear, disk arrays, and server chassis. That's getting pretty close.
He wasn't making a technical prediction of what would be possible. He was making a societal observation which still rings true today.
 
I remember back when dual-core processors first came out, all the reviews of them covered multitasking. That's what they're for, after all.

Lately, with core counts higher than ever, it seems like every single CPU reviewer has completely forgotten about the concept of multitasking. Have a piece of software that can't take full advantage of all the cores by itself? Not a problem. Run more software, maybe even another copy of the same software running another workload.
You don't buy a $4000 64 core CPU over a $2000 32 core CPU because you want to multitask better. You choose the 64 core because you have a use case for a production system that would predictably benefit from the extra 32 cores.

How can anyone review a 64-core processor without once even touching on multitasking? Premiere barely touching the CPU while rendering a video? Well, how does the CPU do if you render the video while playing a game? Compare the results of the game and the render run separately against the results when they run together. This kind of test used to be routine.

For the $2000 difference between the 32-core and 64-core Threadripper, you could build an entire gaming system with a 9700K, 2080 Ti, SSD, 16GB of RAM, etc., except the monitor.
 
Be careful, you could become a meme one day... remember these:

"Nobody will ever need more than 640k of RAM"

There's no denying that the demand for CPU performance has drastically slowed in recent years. I'm typing this message, for instance, on a 10-year-old Core i5 (Nehalem). Perfectly adequate after I replaced the HDD with an SSD. Back in 1995, you certainly couldn't run Windows 95 on a computer you bought in 1985.
 
If you get paid for that, you should look into using a GPU for these conversions. My old RX480 can handle 3 FHD streams at 90 fps when using ffmpeg - much more efficient.
As for Handbrake (yeah, that's actually how you spell it), it's not making proper use of more than 8 CPU cores in HEVC - my 2700X sees only 60% CPU use with it, so you're better off with an i7 9700.


What i7 9700 ??

and what 2700X ??

Try Handbrake with a 16-core Threadripper CPU and see how it destroys your 2700X and i7 9700 ... Handbrake scales very well with more cores.
 
What i7 9700 ??

and what 2700X ??

Try Handbrake with a 16-core Threadripper CPU and see how it destroys your 2700X and i7 9700 ... Handbrake scales very well with more cores.
In h.264, yes (up to 32 threads can be used efficiently with x264, the H.264 encoder Handbrake uses) - in HEVC, not so much. It's not Handbrake that is at fault; it's the encoder itself that doesn't scale properly.
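
That limitation is also why people often sidestep it by running several independent encodes at once, which is where a big core count still pays off. A rough sketch of that workaround (the folder, CRF value, and job count are placeholders):

```python
# Workaround sketch for x265's limited per-encode thread scaling: run several
# independent HEVC encodes at once so a high-core-count CPU stays busy.
# The folder, CRF value, and job count below are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

PARALLEL_JOBS = 4  # tune to (total cores) / (threads one encode actually uses)

def encode(src: Path) -> int:
    dst = src.with_name(src.stem + "_hevc.mkv")
    cmd = ["ffmpeg", "-y", "-i", str(src), "-c:v", "libx265", "-crf", "22", str(dst)]
    return subprocess.run(cmd).returncode  # each ffmpeg process runs its own x265 threads

sources = sorted(Path("input").glob("*.mkv"))
with ThreadPoolExecutor(max_workers=PARALLEL_JOBS) as pool:
    results = list(pool.map(encode, sources))

print(f"{results.count(0)} of {len(results)} encodes finished cleanly")
```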
 
I really hope their engineers are not agreeing with you. I hope they double the cores every other year! Why? Because the more cores they max out, the more cores the lower priced CPUs will have.

Yeah but its the exact opposite.

Chiplet development in core count, clock rate, sophistication (e.g. something hypothetical that can replace speculative execution and out-of-order), etc. follows consumer market principles, not enterprise/workstation.

That is certainly the case in the short to medium term. If AMD comes out with 10- or 12-core chiplets next generation, it will not be because Amazon Web Services is demanding 100-core chips. It will be because people playing video games on garbage Ryzen 3-based HP laptops purchased from Costco want to be able to play better video games.

This is the 2nd layer of AMD's genius with chiplet architecture.

1) Leverage simple, reliable, rock-solid design and foundry services from TSMC et al. to make scalable chips, rather than touchy, finely tuned, complicated, low-yield monolithic monsters

2) Tie together both ends of the business: Pure standard consumer tied to the desired growth market - Enterprise.

That way the revenue from sale of lower end components can "pay for" development of the higher end components, since they are fundamentally related in direct ways.

Unlike Intel, whose Enterprise business is practically distinct from its consumer division, it's basically all the same for AMD.
 
I have also worked at Framestore, ILM, MPC, etc., and the idea of running Windows for VFX at that scale is seriously scary. I think it's fair to say 95%+ of VFX is on Linux, because only a few smaller houses run Windows, often with horrendous results.
Lol, I remember you, Jack. You were the comp supe at Cinesite who got super offended when I dropped by to express my concerns with the comp work you did for one of the hero shots on Gods of Egypt. Your main response to my feedback was to threaten to tell on me to the CG supes and other "powerful" people who could potentially get me in trouble: "oh, so you think the lighting slap looks better than my comp, huh? I will make sure to forward that to the supervisors" (all with a big grin), along with pulling in your colleagues next to you to get them on your side against me.
 
Lol, I remember you, Jack. You were the comp supe at Cinesite who got super offended when I dropped by to express my concerns with the comp work you did for one of the hero shots on Gods of Egypt. Your main response to my feedback was to threaten to tell on me to the CG supes and other "powerful" people who could potentially get me in trouble: "oh, so you think the lighting slap looks better than my comp, huh? I will make sure to forward that to the supervisors" (all with a big grin), along with pulling in your colleagues next to you to get them on your side against me.
Got 'em!
 
I'm like 97% sure you don't just buy a GPU or CPU by itself to mine. You need power supplies, motherboards, fans, etc. And 300w is not the power usage, it's the TDP. And you're forgetting that the network gets bigger over time so you get diminishing returns. It's simply not worth it.
You are absolutely correct.
Without a motherboard, RAM, and a power supply, a GPU and CPU won't work ... I can't believe I have to say this lol.
You can technically get away with not having fans if you build a passively cooled computer. (Obviously not a computer with a Threadripper)

The TDP is close enough to power usage for this scenario.
For example if you go to pcpartpicker and add a Threadripper 3990x as the cpu it shows the TDP as 280 watts.
If you click out of that and go to the main page it shows Estimated Wattage: 280 watts.
I used 300 watts as a conservative guess ... almost a bit too conservative.
Without a video card the wattage comes out to about 389 watts.
https://pcpartpicker.com/list/NB7H7T

Compared to the cost of the $4000 CPU, the price of the motherboard, fans, and power supply is insignificant - roughly 10% of the price.
This of course changes if you add multiple $1000 graphics cards, but I was only curious about this new CPU's performance.

As I mentioned at the bottom, a 3950X would be a better CPU to mine with.
Not only is the 3950X cheaper, the AM4 motherboards (also cheaper) they go in would be able to host more graphics cards in total than a Threadripper motherboard.
The Threadripper motherboard above supports 4 graphics cards,
compared to the 12 graphics cards you could use across a conservative 4 computers with 3 graphics cards each, all using a 3950X.

As for "the network gets bigger over time so you get diminishing returns" that's the whole point of cryptocurrency.
If the network remains small then nobody uses it.
If the network remains small then a 51% attack is more likely
As the network grows and becomes more popular the value of the coin rises to meet demand.
Most cryptocoins do have a difficulty setting that increases with time and the total amount of mining activity (hashes per second) of the network.
Without a difficulty setting, most coins would mint an absurd number of coins as more efficient CPUs and GPUs are released, and/or coins with finite supplies like Bitcoin would quickly mine all their coins.

Bitcoin has another method that restricts the production of bitcoin.
When a bitcoin block is solved, the person and/or mining group is rewarded with a set amount of bitcoin for performing the calculation.
The reward is a set number that decreases over time until a total of 21 million bitcoins have been mined.
The decrease happens about every 4 years and is called the halving (or "halvening"), which does exactly what it says and cuts the reward in half.
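
To put rough numbers on that (these are just the well-known Bitcoin constants, not anything from this thread): the subsidy started at 50 BTC and halves every 210,000 blocks, which is why the total converges on about 21 million.

```python
# Bitcoin's block-subsidy schedule using the well-known constants: a 50 BTC
# starting reward, halved every 210,000 blocks (roughly every 4 years).
HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0

def block_reward(height: int) -> float:
    """Subsidy (in BTC) for a block at the given height."""
    return INITIAL_REWARD / (2 ** (height // HALVING_INTERVAL))

print(block_reward(0))        # 50.0 BTC at launch
print(block_reward(630_000))  # 6.25 BTC after the third halving (2020)

# Summing the subsidy over the halvings shows why the supply tops out near
# 21 million (the real chain rounds to whole satoshis, so the true cap is a
# hair under that).
total = sum(HALVING_INTERVAL * block_reward(era * HALVING_INTERVAL) for era in range(34))
print(f"Approximate total supply: {total:,.0f} BTC")
```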
 
No one in their right mind will buy this CPU for mining...
Go big or go home lol.

From a management point of view, it is easier to manage 1 computer versus 4 computers.

But I'll admit from a pure profit point of view the Ryzen 3950x (times 3-4) is a better alternative than the Threadripper 3990x.

Having 3-4 Ryzen 3950x systems would have higher cpu performance than a single 3990x Threadripper.

Graphics cards and their prices complicate this, but the crypto-profit-per-dollar-spent-on-computer-parts calculation still favors the Ryzen 3950x.

In addition 3-4 Ryzen systems would give you more pci-e x16 slots to work with for the all important graphics cards.

Both Ryzen 3950x and Threadripper 3990x are technically profitable.
 
Yeah but its the exact opposite.

Chiplet development in core count, clock rate, sophistication (e.g. something hypothetical that can replace speculative execution and out-of-order), etc. follows consumer market principles, not enterprise/workstation.

That is certainly the case in the short to medium term. If AMD comes out with 10- or 12-core chiplets next generation, it will not be because Amazon Web Services is demanding 100-core chips. It will be because people playing video games on garbage Ryzen 3-based HP laptops purchased from Costco want to be able to play better video games.

This is the 2nd layer of AMD's genius with chiplet architecture.

1) Leverage simple, reliable, rock-solid design and foundry services from TSMC et al. to make scalable chips, rather than touchy, finely tuned, complicated, low-yield monolithic monsters

2) Tie together both ends of the business: Pure standard consumer tied to the desired growth market - Enterprise.

That way the revenue from sale of lower end components can "pay for" development of the higher end components, since they are fundamentally related in direct ways.

Unlike Intel, whose Enterprise business is practically distinct from its consumer division, it's basically all the same for AMD.


You do know AMD is a technology company, right? With technology products, the product development cycle is slightly different. It's a continuous cycle of:
idea -> market evaluation -> development phase I -> market evaluation -> changes -> ... -> product finished -> package product and prepare marketing... and repeat many of these cycles.

Product development doesn't start with gamers asking AMD to give them X number of cores or X GHz because they want more FPS. It's the opposite: first the product, then the market, but very closely aligned.
 
Chiplet development in core count, clock rate, sophistication (e.g. something hypothetical that can replace speculative execution and out-of-order), etc. follows consumer market principles, not enterprise/workstation.
I'm not sure about that. I don't know the relative volumes, but I think AMD's non-APU Ryzen strategy is driven by cloud/server/HPC markets, with consumers simply serving as the bleeding edge beta testers and helping to drive volume and publicity.

That is certainly the case in the short to medium term. If AMD comes out with 10- or 12-core chiplets next generation, it will not be because Amazon Web Services is demanding 100-core chips. It will be because people playing video games on garbage Ryzen 3-based HP laptops purchased from Costco want to be able to play better video games.
Except that APUs don't use chiplets! They use a custom-built, monolithic die, with iGPU. So, APUs have essentially nothing to do with AMD's whole chiplet strategy.

This is the 2nd layer of AMD's genius with chiplet architecture.

...

2) Tie together both ends of the business: Pure standard consumer tied to the desired growth market - Enterprise.
There's a downside to addressing them with the same products, which is that cloud wants an efficiency-optimized micro-architecture, while consumers want something more throughput-oriented.

By having separate silicon for each market, Intel can potentially (though I don't know the extent that they do) optimize their micro-architecture for each market.

If AMD pulled out all the stops and designed dedicated consumer cores, you'd probably see them clocking a bit higher and offering even stronger single-thread performance.
 
Go big or go home lol.
Unless you want a server-class workstation for home use, in which case you can now go big and go home!

But I'll admit from a pure profit point of view the Ryzen 3950x (times 3-4) is a better alternative than the Threadripper 3990x.
In terms of perf/$, 3900X is best. At MSRP, at least.

Having 3-4 Ryzen 3950x systems would have higher cpu performance than a single 3990x Threadripper.
In benchmarks that scale well, I've noticed that the 3990X is about 3 times as fast as a 3950X. Given that it has 4 times the cores with only 2 times the memory bandwidth, that shouldn't be too surprising.

In addition 3-4 Ryzen systems would give you more pci-e x16 slots to work with for the all important graphics cards.
If you're doing GPU-based crypto mining, then you're better off with a specialized mobo with a bunch of x1 slots. Crypto doesn't need PCIe bandwidth.

Both Ryzen 3950x and Threadripper 3990x are technically profitable.
...I wonder.

Putting that into a Monero calculator with a 300 watt power drain for the system and $0.06 cost per kWh, we get $1,364 profit a year.
As @mcgge1360 pointed out, 300 W is optimistic. Once you add in the overhead of other components and losses from the PSU, you're going to be above that.

And $0.06/kWh is what you pay where? That's the final cost, including delivery charges, taxes, etc.? Last I checked, I'm somewhere north of $0.20/kWh.
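
To put numbers on that objection, here's a back-of-envelope sketch. It assumes the $1,364/year figure quoted above was net of electricity at $0.06/kWh (my assumption, not stated in the original post), and just re-runs the power cost at a few rates:

```python
# Back-of-envelope: how the electricity rate changes the picture for a rig that
# pulls a constant 300 W. Assumes the $1,364/year figure was net of power at
# $0.06/kWh (my assumption).
HOURS_PER_YEAR = 24 * 365   # 8,760 hours
DRAW_KW = 0.300             # constant 300 W system draw

def yearly_power_cost(rate_per_kwh: float) -> float:
    return DRAW_KW * HOURS_PER_YEAR * rate_per_kwh

quoted_net_profit = 1364.0                              # from the earlier post
gross = quoted_net_profit + yearly_power_cost(0.06)     # back out the assumed power cost

for rate in (0.06, 0.12, 0.20):
    cost = yearly_power_cost(rate)
    print(f"${rate:.2f}/kWh: power ~${cost:.0f}/yr, profit ~${gross - cost:.0f}/yr")
```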
 
I work in the VFX industry, where I've been at ILM, DNEG, MPC, and Cinesite, which work on most of the blockbuster movies, and I can tell you this: Windows is definitely not the OS of choice; that would be Linux.
As you will know, but many won't, these studios used SGI machines when they were founded (or soon thereafter). During this time, SGI had its own UNIX-like OS, called IRIX.

So, a lot of the Pro rendering software was written for UNIX, and the studios developed UNIX-centric in-house expertise. For them, I'd imagine the transition to Linux was a fairly natural evolution. And the Linux community is probably indebted to them for the quality of Linux graphics drivers.

I'd imagine it's a somewhat similar situation (though maybe less so) with CAD and other professional visualization software.

Linux gaming only became a "thing", maybe 10 years ago, and I think still isn't big enough to really support the work that Nvidia, AMD, and Intel do on their Linux drivers.

I do, however, currently work at a smaller VFX studio, and they use Windows.
Does Cinema 4D support Linux, by any chance? Last I checked, I couldn't find a version of their Cinebench for Linux.
 
So, a lot of the Pro rendering software was written for UNIX, and the studios developed UNIX-centric in-house expertise. For them, I'd imagine the transition to Linux was a fairly natural evolution. And the Linux community is probably indebted to them for the quality of Linux graphics drivers.
Big studios don't care about what software they use; it all comes down to where they can save money, as well as performance. A Windows license costs money, whereas Linux is free. Also, Linux manages multi-threaded tasks a lot better than Windows, so you get much better render times. These big studios also have a software team to manage and customize Linux to their needs, making it safer and more efficient. With these three things combined, I think those are the main reasons they are sticking to Linux.
 