News Ryzen 7000 Threadripper Could Resurrect HEDT, Have PCIe Gen 5

It's not about "resurrecting HEDT"; HEDT died for a good reason.

More and more software now supports GPU acceleration (DaVinci Resolve, Adobe Premiere, SolidWorks, etc.).

CPUs are simply not very good at doing more than one task at once. GPUs, on the other hand, are designed from the ground up to be parallel monsters, doing dot products and matrix math thousands of times faster than CPUs.
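To give a feel for the kind of gap being described, here is a minimal, non-rigorous sketch using PyTorch (the matrix size is arbitrary and the actual ratio depends entirely on the hardware):

# Not a benchmark, just an illustration of CPU vs. GPU matrix math.
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

t0 = time.time()
_ = a @ b                      # matrix multiply on the CPU
cpu_s = time.time() - t0
print(f"CPU: {cpu_s:.3f}s")

if torch.cuda.is_available():  # only run the GPU side if a CUDA card is present
    ag, bg = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = ag @ bg                # same multiply on the GPU
    torch.cuda.synchronize()
    gpu_s = time.time() - t0
    print(f"GPU: {gpu_s:.3f}s")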

The more software is optimized for GPU, the more irrelevant all those CPU cores become, because the bottleneck is not determined by the number of CPU cores, but by Amdahl's law.

The more GPU-optimized software has become, the more CPU single-core performance starts to matter.
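To make the Amdahl's law point concrete, a minimal sketch; the 90% parallel fraction is just an assumed example figure, not a measurement from any particular application:

# Amdahl's law: overall speedup is capped by the serial fraction of the work.
def amdahl_speedup(cores, parallel_fraction):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for cores in (4, 16, 64):
    print(cores, "cores ->", round(amdahl_speedup(cores, 0.90), 1), "x")
# 4 -> 3.1x, 16 -> 6.4x, 64 -> 8.8x: past a point, extra cores barely help,
# and the serial part (i.e. single-core speed) sets the ceiling.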

A lot of those synthetic benchmarks Tomshardware still uses to determine multithreaded performance are completely irrelevant in the real world. Professional software that needs a lot of compute power uses GPU acceleration. And in optimized professional software using GPU acceleration, single-threaded CPU performance is far more important.

Intel has seen the writing on the wall: properly optimized software uses GPU compute and has no use for 24 CPU cores, or even 12. This is why Intel is now making GPUs.
 
HEDT died for a good reason.
It really didn't.

CPUs are simply not very good at doing more than one task at once.
Multiple virtual machines.
Game server hosting.
More memory channels.
More PCIe lanes (mainly why I want it back, alongside the memory channels).
Etc.
Intel has seen the writing on the wall: properly optimized software uses GPU compute and has no use for 24 CPU cores
Some programs do scale to use all your cores, e.g. Maven can be made to use as many cores as you've got if you allow it.
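For reference, the knob being alluded to is Maven's parallel build flag; the "1C" suffix means one build thread per available core (how much it helps depends on how independent your modules are):

mvn -T 1C clean install   # one build thread per CPU core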


If "HEDT" costs practically as much as EPYC/Xeon using practically the same upmarket 1P motherboards, then that would be "resurrecting" HEDT in name only.
If that were the case, then HEDT never even died, as the latest TR Pro exists... it just costs like 7 grand :|
 
It's not about "resurrecting HEDT"; HEDT died for a good reason.

More and more software now supports GPU acceleration (DaVinci Resolve, Adobe Premiere, SolidWorks, etc.).

CPUs are simply not very good at doing more than one task at once. GPUs, on the other hand, are designed from the ground up to be parallel monsters, doing dot products and matrix math thousands of times faster than CPUs.

The more software is optimized for GPU, the more irrelevant all those CPU cores become, because the bottleneck is not determined by the number of CPU cores, but by Amdahl's law.

The more GPU-optimized software has become, the more the CPU bottleneck is determined by Amdahl's law, and the more single-core performance starts to matter.

A lot of those synthetic benchmarks Tomshardware still uses to determine multithreaded performance are completely irrelevant in the real world. Professional software that needs a lot of compute power uses GPU acceleration. And in optimized professional software, single-threaded performance is now more important.

The primary reason I want HEDT back is PCI-e lanes. 20/24 usable lanes isn't enough for me. Currently using all 48 lanes on my X299 platform. But I also want OC support and super-fast clocks and optimized single-threaded performance. I don't really care about having more than 16 cores at this time. Quad-channel memory is nice in certain situations as well, but less important to me as it doesn't really help gaming performance at all.
I need to give Resolve a try. Premiere is still very CPU and cache bound in many workloads.
 
Good news if it is the case... Let's see some prices first, though xD

While I'd love to have ODR5 (octo-channel DDR5), I have to admit it sounds like it'll be super expensive, without even taking into account the cost of the motherboard and CPU. Still, this should be welcome for peeps that depend on CPU grunt. Even if they still keep it at 64C/128T, it'll be hard for Intel to top.

Regards.
 
I don't have any faith in AMD bringing back HEDT as the 3xxx TR started at $1400. To me this (and Intel's complete lack of competition) is really what axed HEDT, because if you could afford that price of entry you could likely justify TR Pro.

Regarding this leak I find it somewhat interesting that they're talking 4/8ch memory given that Genoa is 12ch. It's not impossible that this is how AMD is scaling things, but I'd assumed the top end would also be 12ch not 8.
 
Intel has seen the writing on the wall: properly optimized software uses GPU compute and has no use for 24 CPU cores, or even 12. This is why Intel is now making GPUs.

So, you don't use Adobe software, do you?

Many cores and threads mean I can have many poorly coded apps open and working at the same time.
 
I just upgraded to a 7950X and snagged a 4090 FE two weeks ago. I'm running 8x8 with a 3090. They really should have added 4 more lanes for quad-channel memory and called it a day, especially with the price of these high-end MBs. The memory channels are what's holding it back. I was running Blender sims with FLIP Fluids; you could see the CPU wasn't doing much as you upped the resolution of the sim. Anything past 150 and it's a memory bandwidth issue.
 
To me it seems like CPU cores do a much better job of converting video, at least when it comes to picture quality. While GPU conversion is speedy, I find the picture quality is inferior bit for bit compared to a CPU transcode (not talking about Quick Sync).
 
This will only work with decent pricing; HEDT has to be significantly cheaper than workstation parts to be even vaguely viable. Given where the 7950X is positioned, I would expect it needs to be in the $2k-$2.5k range for the top-of-the-line 64C/128T part to have any chance of being something people will look at. The market will be pretty rarefied at that price point as it is. I like the idea of a potential HEDT future where you could build an enormously powerful rig all in for $5k-$6k; I'm just not sure AMD can make enough out of it at an "affordable" price to make it worth their while.
 
Presumably, the price will be so high that the audience will be extremely limited. That said, I still wish I had an excuse to purchase and build one. Even "just" 64 PCIe 5.0 lanes would make for an incredibly expandable system with the right motherboard.
 
Because they want more of your money and if you cannot get through that hoop, they don't care.
AMD can't help it; sure, they want your money, but they also don't have the vast supply others have, and they have to make do with the supply they do have.
Even the 7950X, which uses up two CCDs, pairs one good one with one not-so-good one to be able to make more CPUs that can hit that single-core mark, at least on one of the 16 cores...
https://www.techpowerup.com/review/amd-ryzen-9-7950x/26.html
[Attached image: ccd-1t.jpg]
 
The primary reason I want HEDT back is PCI-e lanes. 20/24 usable lanes isn't enough for me. Currently using all 48 lanes on my X299 platform. But I also want OC support and super-fast clocks and optimized single-threaded performance. I don't really care about having more than 16 cores at this time. Quad-channel memory is nice in certain situations as well, but less important to me as it doesn't really help gaming performance at all.
I need to give Resolve a try. Premiere is still very CPU and cache bound in many workloads.
Exactly! I have a 2920X, the lowest one they make. I don't need the cores, I need the lanes: I'm using 2 mini-SAS cards to push my array, a 10GbE NIC, a GPU, and 3 NVMe drives in a cache pool running at x4, and I still have lanes available if I need them.
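A rough lane tally for a setup like that, with assumed link widths (x8 HBAs, an x4 NIC, an x16 GPU), not the poster's exact hardware; it shows why 20/24 mainstream lanes don't cut it:

# Back-of-the-envelope PCIe lane budget; per-device widths are assumptions.
devices = {
    "mini-SAS HBA #1": 8,
    "mini-SAS HBA #2": 8,
    "10GbE NIC": 4,
    "GPU": 16,
    "3x NVMe at x4 each": 12,
}
print(sum(devices.values()), "lanes")  # 48 lanes -- far more than a mainstream desktop CPU exposes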
 
Intel also left the market after Cascade Lake-X.
Intel left because it had nothing to offer that market segment. Ice Lake had such lackluster single-thread performance that most HEDT users wouldn't have been interested.

As for AMD, I can't really say... but I think the unexpected, pandemic-fueled demand for their Zen 3 chiplets resulted in a decision to steer them towards higher-margin products.

BTW, September 2023 seems a depressingly long wait, given they're basically just scaled-down versions of Genoa. Not as bad as the wait for Threadripper 5000, I guess.
 
CPUs are simply not very good at doing more than one task at once. GPUs, on the other hand, are designed from the ground up to be parallel monsters, doing dot products and matrix math thousands of times faster than CPUs.
LOL. You've got it backwards. CPUs are great at multitasking, compared with GPUs!

A lot of those synthetic benchmarks Tomshardware still uses to determine multithreaded performance are completely irrelevant in the real world. Professional software that needs a lot of compute power uses GPU acceleration.
LOL. Please let me know when I can compile code and run VMs on my GPU!

Also, you're completely disregarding memory capacity. We can use 128 GB in desktop platforms, but at the performance penalty of 2 DIMMs per channel. With quad-channel, we can do that penalty-free, or we can go up to 256 GB with a penalty (but it matters less, because the speed difference is more than offset by double the channels).
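The capacity math behind that, assuming 32 GB UDIMMs (the largest commonly available right now):

# Capacity options with 32 GB DDR5 UDIMMs.
dimm_gb = 32
print(2 * 2 * dimm_gb)  # dual-channel, 2 DIMMs per channel -> 128 GB, with the 2DPC speed penalty
print(4 * 1 * dimm_gb)  # quad-channel, 1 DIMM per channel  -> 128 GB, no 2DPC penalty
print(4 * 2 * dimm_gb)  # quad-channel, 2 DIMMs per channel -> 256 GB, penalty offset by double the channels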
 
I don't have any faith in AMD bringing back HEDT as the 3xxx TR started at $1400. To me this (and Intel's complete lack of competition) is really what axed HEDT,
Maybe the high pricing & lack of competition aren't unrelated?

Regarding this leak I find it somewhat interesting that they're talking 4/8ch memory given that Genoa is 12ch. It's not impossible that this is how AMD is scaling things, but I'd assumed the top end would also be 12ch not 8.
Maybe AMD wants to keep a bit of market segmentation between their TR Pro line and EPYC, this time around?
 
Maybe the high pricing & lack of competition aren't unrelated?
Oh I'm certain it is related, but that doesn't make it any less scummy or less of a reason for the market dying. AMD chose to make massive margins because they could, and we saw this repeated with TR 5xxx.

I'm not confident that there are enough people left in that traditional HEDT market (read as desktop +25% starting price range) now for Intel/AMD to care, but if there are and the leak dates are right maybe we'll see a price battle between the two in 2023.
 
It's not about "resurrecting HEDT"; HEDT died for a good reason.

More and more software now supports GPU acceleration (DaVinci Resolve, Adobe Premiere, SolidWorks, etc.).

CPUs are simply not very good at doing more than one task at once. GPUs, on the other hand, are designed from the ground up to be parallel monsters, doing dot products and matrix math thousands of times faster than CPUs.

The more software is optimized for GPU, the more irrelevant all those CPU cores become, because the bottleneck is not determined by the number of CPU cores, but by Amdahl's law.

The more GPU-optimized software has become, the more CPU single-core performance starts to matter.

A lot of those synthetic benchmarks Tomshardware still uses to determine multithreaded performance are completely irrelevant in the real world. Professional software that needs a lot of compute power uses GPU acceleration. And in optimized professional software using GPU acceleration, single-threaded CPU performance is far more important.

Intel has seen the writing on the wall: properly optimized software uses GPU compute and has no use for 24 CPU cores, or even 12. This is why Intel is now making GPUs.

It is not possible to use the GPU for all calculations; only a very limited number of algorithms can be executed on a GPU. Because performance and core counts are limited on CPUs compared to GPUs, GPU-based approaches are preferred even though a CPU alternative can sometimes be better.

With DDR5 we currently have a limit of 64 GB of RAM on the desktop. The hardware limitation is 128 GB once bigger modules become available. That's definitely a step backward from five years ago, when 128 GB (and then 256 GB) was available with the first Threadripper.
 
With DDR5 we currently have a limit of 64 GB of RAM on the desktop. The hardware limitation is 128 GB once bigger modules become available.
32 GB dual-ranked DDR5 UDIMMs have been available for a year, now. Alder Lake boards with 4 DIMM slots support 128 GB, but your memory speed drops quite considerably. Raptor Lake improves the situation, but we were and are in a situation where 128 GB is supported with caveats.

Intel's product specifications state the maximum memory capacity of gen-12 and gen-13 CPUs is 128 GB. It's possible they only exposed enough address pins, in which case simply plugging in a bigger DIMM wouldn't work. I'm not sure we'll know until larger DIMMs exist and it can be tried.

With that aside, I fully agree with the idea that it would be nice to have a 4-channel platform.
 
32 GB dual-ranked DDR5 UDIMMs have been available for a year, now. Alder Lake boards with 4 DIMM slots support 128 GB, but your memory speed drops quite considerably. Raptor Lake improves the situation, but we were and are in a situation where 128 GB is supported with caveats.

Intel's product specifications state the maximum memory capacity of gen-12 and gen-13 CPUs is 128 GB. It's possible they only exposed enough address pins, in which case simply plugging in a bigger DIMM wouldn't work. I'm not sure we'll know until larger DIMMs exist and it can be tried.

With that aside, I fully agree with the idea that it would be nice to have a 4-channel platform.

I have two dual-ranked 32 GB modules; so far I have never heard of success with 128 GB of DDR5. It seems to me that we are waiting for single-ranked 32 GB or dual-ranked 64 GB modules.

The AMD alternative has the same 128 GB limit.