News: AMD Announces Threadripper HEDT and Pro 7000-WX Series Processors: 96 Cores and 192 Threads for Desktops and Workstations


bit_user

Was just wondering: for a long-lifespan product, would there be any merit (even if crazy) to building a Threadripper system and sitting on it for 10-plus years, just upgrading the GPU to suit your needs over the years?
I keep machines that long, occasionally upgrading the SSDs and GPU. Not for gaming, though. Buy good-quality components, don't overclock, and use a UPS with active line filtering.

Anyway, as core counts in the mainstream continue to increase, I do expect games will continue to harness them. However, I still think games like clock speed too much to disregard that aspect. For a gaming PC, you pretty much can't beat the current range of desktops, as they're set up to milk clock speed as much as possible.

BTW, one issue you'll tend to face with a 10-year-old PC is software support. Windows 11 doesn't officially support Sandy Bridge or Ivy Bridge, although you can work around that, for now. Linux isn't a perfect alternative, since the code paths supporting older, less-used hardware tend to accumulate bugs that don't get noticed or fixed for a while. For maximum stability, I'd say buy hardware at least 9 months after it's introduced and don't keep it for more than about 5 years.

Also, gaming with a 5-year-old CPU is probably a much more viable proposition than a 10-year-old one.
 

PCWarrior

                     3990X      7980X
Cores/Threads        64/128     64/128
TDP                  280W       350W
Base Freq.           2.9GHz     3.2GHz
Max Freq.            4.3GHz     5.1GHz
L3 Cache             256MB      256MB
Microarchitecture    Zen 2      Zen 4
Process Node         TSMC N7    TSMC N5
Year                 2020       2023
Price                $3,990     $4,999
Price increase       --         +25.3%


                     3970X      7970X
Cores/Threads        32/64      32/64
TDP                  280W       350W
Base Freq.           3.7GHz     4.0GHz
Max Freq.            4.5GHz     5.3GHz
L3 Cache             128MB      128MB
Microarchitecture    Zen 2      Zen 4
Process Node         TSMC N7    TSMC N5
Year                 2020       2023
Price                $1,999     $2,499
Price increase       --         +25%


IPC from Zen 2 to Zen 3 was 19%, and from Zen 3 to Zen 4 it was 13%. Combined, you get 34.5%. In all-core workloads you will also get a ~10% frequency bump, while in single/lightly-threaded workloads you will get a 17-19% bump. So overall you get 48-59% higher performance on average versus a 3-year-old Threadripper, for a 25% higher price and 25% higher power consumption. For only ~23% better performance per dollar in 3 years, while consuming 25% more power, no one should be excited. If this were an Intel release, Intel would get bashed. But as it is AMD, it gets a free pass. Not by me.
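(For anyone checking the arithmetic, here's a rough back-of-the-envelope sketch in Python. The IPC, frequency, and price figures are the ones quoted above, not measurements; the perf-per-dollar range brackets the ~23% midpoint.)

```python
# Back-of-the-envelope math using the figures quoted above:
# IPC uplifts of 19% (Zen 2 -> 3) and 13% (Zen 3 -> 4), a ~10% all-core
# frequency bump, a 17-19% lightly-threaded bump, and a 25% price hike.
ipc = 1.19 * 1.13            # combined IPC uplift: ~1.345 (+34.5%)
all_core = ipc * 1.10        # ~1.48 (+48%)
light_lo = ipc * 1.17        # ~1.57 (+57%)
light_hi = ipc * 1.19        # ~1.60 (+60%)
price = 1.25                 # +25%

print(f"All-core perf:     +{(all_core - 1) * 100:.0f}%")
print(f"Light-thread perf: +{(light_lo - 1) * 100:.0f}% to +{(light_hi - 1) * 100:.0f}%")
# Perf/dollar spans ~+18% (all-core) to ~+28% (light), ~+23% at the midpoint.
print(f"Perf per dollar:   +{(all_core / price - 1) * 100:.0f}% to +{(light_hi / price - 1) * 100:.0f}%")
```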
 

ilukey77

Nah, was just wondering. Not sure I could warrant $5K to $10K USD, let alone afford that, for just a single CPU. Was just wondering if there was any merit to using Threadripper as a PC with GPU upgrades!
 
So overall you get 48-59% higher performance on average versus a 3-year-old Threadripper, for a 25% higher price and 25% higher power consumption. ... If this were an Intel release, Intel would get bashed. But as it is AMD, it gets a free pass. Not by me.

It's a niche market and AMD has the winning hand, reaping the rewards, as they say. And this is not like Nvidia's 4060 Ti: they are selling and making money with this platform, or they would have cancelled Threadripper like the rumours suggested before.
 

PCWarrior

When Intel did the same with Xeons and HEDT CPUs (Haswell-E, Broadwell-E), they were called greedy and accused of keeping the consumer platform stuck at 4 cores. Intel's rationale back then was also that if you need more cores, you are using your CPU professionally, so you can afford to pay more and give them higher margins. Why the double standard for AMD?
 
IPC from Zen 2 to Zen 3 was 19%, and from Zen 3 to Zen 4 it was 13%. ... For only ~23% better performance per dollar in 3 years, while consuming 25% more power, no one should be excited.
The new ones also have much better connectivity, despite having fewer available PCIe lanes, and the extra bandwidth from DDR5 means they shouldn't be bandwidth-starved in 64-core form (over 60% more bandwidth; see the quick check below; though it remains to be seen how well the memory controller clocks, so it could potentially be even more). I'd be more upset by AMD's pricing if it didn't undercut Intel so much: SPR Xeon W in 4-channel config maxes out at 24 cores, and that CPU costs $2,200 tray ($2,130 at Newegg right now) versus AMD's $1,500 for the Zen 4 24-core. Intel also doesn't have anything that compares to the 32/64-core models, so AMD is unfortunately free to price those however they want. The 32/36- and 56-core SPR Xeon W parts are 8-channel only, so they sit at cost parity with Zen 4 TR Pro, which is probably why the price increased more on the 32/64-core parts than the 24-core.

I don't think people should be happy with the increased pricing, but the pricing is unfortunately sensible for the market that exists. I still find the overall price of entry to be the biggest problem, given that the least expensive TR is still $1,500. This, in my opinion, is why HEDT as it used to exist is just dead.
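(As a quick sanity check on the "over 60% more bandwidth" figure, here's a sketch assuming quad-channel DDR4-3200 on Threadripper 3000 versus quad-channel DDR5-5200 on the 7000-series parts; those speeds are my assumption of the officially supported rates.)

```python
# Peak theoretical memory bandwidth: transfer rate (MT/s) x 8 bytes per
# transfer per channel. Assumes quad-channel DDR4-3200 vs DDR5-5200.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 4) -> float:
    return mt_per_s * 8 * channels / 1000  # GB/s

ddr4 = peak_bandwidth_gbs(3200)  # ~102.4 GB/s
ddr5 = peak_bandwidth_gbs(5200)  # ~166.4 GB/s
print(f"DDR4-3200 x4: {ddr4:.1f} GB/s")
print(f"DDR5-5200 x4: {ddr5:.1f} GB/s (+{(ddr5 / ddr4 - 1) * 100:.1f}%)")  # +62.5%
```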
 

bit_user

                     3990X      7980X
Microarchitecture    Zen 2      Zen 4
Process Node         TSMC N7    TSMC N5
Year                 2020       2023
Why'd you skip the Zen 3-based 5000 series?

overall you get 48-59% higher performance on average versus a 3-year-old Threadripper, for a 25% higher price and 25% higher power consumption.
Also, DDR5, PCIe 5.0, and AVX-512. Bigger memory capacity, as well.

The other issue I see is that your performance math assumes perfect scaling. Let's look at the reality.
[AMD announcement slide: Threadripper 7000 vs. 5000 series performance comparison]

Note that, unlike your projections, this is comparing the 5000 to the 7000 series (i.e., one generation, instead of two). We'll have to wait a month for independent benchmarks to be conducted, as this is merely the announcement; availability doesn't happen until Nov. 21st.
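(As a toy illustration of why paper math overshoots: if some fraction of a workload doesn't speed up at all (memory-bound, serial, or I/O-bound work), the realized gain falls short. The no-gain fractions below are made up for illustration.)

```python
# Generalized Amdahl's law: only (1 - f) of the work benefits from the
# projected uplift; the no-gain fractions here are purely illustrative.
def realized_uplift(projected: float, no_gain_fraction: float) -> float:
    f = no_gain_fraction
    return 1 / (f + (1 - f) / projected)

projected = 1.48  # the +48% all-core projection from the math above
for f in (0.00, 0.05, 0.10):
    gain = (realized_uplift(projected, f) - 1) * 100
    print(f"no-gain fraction {f:.0%}: realized uplift +{gain:.0f}%")
# 0% -> +48%, 5% -> +45%, 10% -> +41%
```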

For only ~23% better performance per dollar in 3 years, while consuming 25% more power, no one should be excited. If this were an Intel release, Intel would get bashed. But as it is AMD, it gets a free pass. Not by me.
So, why don't you go ahead and give us the same comparison between the Ice Lake Xeon W-3300 and Sapphire Rapids Xeon W-3400 series? Let's see how Intel has done on cost and power in the workstation segment!

At the end of the day, AMD just needs to be competitive with Intel. If they've done that, then people will buy these new Threadrippers. If not, they won't. Simple as that.
 

bit_user

Was just wondering if there was any merit to using Threadripper as a PC with GPU upgrades!
If you need the core count or the I/O, then yes. Otherwise, I don't think it represents a very good value.

I think you'd know if you needed the core count (i.e., is your existing CPU frequently maxed on all cores?). I've heard of some software developers getting Threadrippers if they do frequent rebuilds of huge software packages. However, most developers probably spend the bulk of their time waiting for incremental builds, where you're better served by the combination of a modest number of cores and higher single-core performance.
 

bit_user

When Intel did the same with Xeons and HEDT CPUs (Haswell-E, Broadwell-E), they were called greedy ...
Why the double standard for AMD?
Because we've already been desensitized by Intel!
🤣

Seriously, I will criticize them both for platform and entry-level costs. I've long since been priced out of the workstation market, however, so my sense of disenfranchisement is no longer fresh. Now, I guess I'm just jaded and resigned to my fate as a lowly desktop user. Given how far the desktop platform has come in the past 5 years, I'm now a lot more okay with that.
 
If you need the core count or the I/O, then yes. Otherwise, I don't think it represents a very good value. ...

I remember AMD introducing FEMFX as a PhysX alternative that uses CPU cores. Maybe if that had taken off, it would make use of all the CPU cores in a game engine. But otherwise, there's not much use for, or advantage in, utilizing all the cores.

Maybe they can bring in Xilinx's AI cores and HBM to Threadripper and make GPUs cheaper. Offload the AI/ray tracing to the CPU cores...
 

bit_user

Maybe they can bring in Xilinx's AI cores and HBM to Threadripper
No, the AI cores don't have terribly high performance. They're aimed at the laptop market, where power-efficiency is at a premium.

and make GPUs cheaper. Offload the AI/ray tracing to the CPU cores...
I don't follow. Do you mean they would eliminate the hardware ray tracing from their GPUs, in favor of doing it on the CPU? The whole point of putting it in hardware was to make it fast enough to do in real-time.
 
I don't follow. Do you mean they would eliminate the hardware ray tracing from their GPUs, in favor of doing it on the CPU? The whole point of putting it in hardware was to make it fast enough to do in real-time.

No, I meant with the AI cores. Although they won't be specialized RT cores, they might have enough grunt to do some upscaling work and reduce the resources required from the GPU?

Just thinking of how the AI tech could be used...
 

bit_user

No, I meant with the AI cores. Although they won't be specialized RT cores, they might have enough grunt to do some upscaling work and reduce the resources required from the GPU?
For the AI cores to have a meaningful amount of horsepower compared to GPUs, they would have to use somewhere on the order of that same amount of silicon. That would add non-trivial cost and cooling requirements to the CPU, which is why I don't expect it will happen.

Just thinking of how the AI tech could be used...
CPU-integrated AI acceleration will initially be used for better power efficiency than trying to do lightweight AI inferencing on CPU cores or the iGPU. For anything heavyweight, you'll still need a dGPU or a separate AI accelerator. It's basically analogous to the way iGPUs didn't make dGPUs obsolete.

If you want to see something fundamentally different, check out MI300, where AMD is mixing CPU and GPU cores. However, in that case, it's mostly a GPU/AI processor with some CPU cores mixed in, rather than the other way around.


Intel has announced something similar, which they call Falcon Shores.

Intel has since delayed Falcon Shores until 2025 and announced that it will initially ship with no CPU tiles. So, a big walk-back.

Last, but not least, Nvidia has their GH200 superchip:


That combines a 72-core Arm CPU and an H100-like GPU accelerator on the same daughter card. They're separate packages, however.
 

newtechldtech

If you mean to say that the Mac Studio is not even competition, then yes, you are probably right.

And if you mean that Apple has 192GB of memory and 800GB/s of bandwidth, both Nvidia and AMD workstation cards can run in multi-GPU configurations to match that.

RTX 8000-based workstations might cost a bit more than a 192GB Mac Studio (~$8K), but the performance will leave the Mac in the dust.

That's why movie studios use non-Apple machines for their Unreal Engine workstations. The days of Mac computers being used to design Disney movies are over.

You are comparing apples to oranges...
 

jasonf2

Was just wondering: for a long-lifespan product, would there be any merit (even if crazy) to building a Threadripper system and sitting on it for 10-plus years, just upgrading the GPU to suit your needs over the years?
I did something similar to this with my first-gen Threadripper. It was a great idea until Microsoft made it obsolete with Windows 11's security requirements. Also keep in mind that IPC increases each generation. So while I still have a decent core count in the first-gen Threadripper, its overall performance isn't better than mainstream hardware, because the baselines have moved up so much. Compounding a ~10 percent annual increase in IPC, the average core has about 46% more processing power after four generational jumps (see the sketch below).

When you consider that my top-end 16-core Threadripper cost $750 new, I am OK with that. At these prices, I think you may be better off buying a high-end Ryzen or i9 SKU every 5 years. You will end up with better performance in most common workloads for less money.

These machines are for situations where a heavy-lifting computer (with workloads that will actually utilize the cores) is needed and really well-paid people are waiting on the results (and you can make an ROI case based on those expenses). Think CAD rendering and social media influencers. While I am sure this thing will play Crysis very nicely today, you are going to find that in 10 years it isn't really that great.
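(The compounding in that ~10%-per-year assumption works out like this:)

```python
# Compound IPC growth at an assumed ~10% per generational jump.
rate = 0.10
for jumps in (3, 4, 5):
    print(f"after {jumps} jumps: +{((1 + rate) ** jumps - 1) * 100:.0f}%")
# after 3 jumps: +33%, after 4 jumps: +46%, after 5 jumps: +61%
```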
 