News AMD Unveils Three Ryzen 7000X3D V-Cache Chips, Three New 65W Non-X CPUs, Too

Status
Not open for further replies.
Just stop. No one looking for a value option is going to shell out $300+ on "CL30" DDR5 for a minuscule gain when they can just keep using their DDR4 with Intel instead.

If you are going to spend an extra $300 on hardware as a value customer, it had better be something worthwhile, like a better GPU that doubles your framerate, instead of top-of-the-line DDR5 for 2 fps more.

AMD made a really dumb decision by forcing people to buy DDR5.
When did they force anyone? When was someone threatened to upgrade or else? When did DDR4 become obsolete and impossible to find?
 

headloser

Distinguished
May 31, 2010
Well, goody for the games, but is it only good for GAMES? What were they thinking? I am hoping they will release a Ryzen CPU with a built-in GPU. I am using a Ryzen 5600G, and the built-in GPU does a pretty good job for now. And with GPU prices WAY WAY beyond logic, I am going to wait for the next generation of AMD GPUs.
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
When did they force anyone? When was someone threatened to upgrade or else? When did DDR4 become obsolete and impossible to find?

You're forced to use DDR5 to use these new AMD chips.

If you want to be an obsessive pedant about it, no, you're not "physically forced" to buy DDR5 at gunpoint. You can also buy these CPUs and put them on a bookshelf just to look at.

Fact is, Intel has a good value proposition with their new i3 + DDR4. AMD has no value proposition.
 
Last edited:
  • Like
Reactions: KyaraM
You're forced to use DDR5 to use these new AMD chips.

If you want to be an obsessive pedant about it, no, you're not "physically forced" to buy DDR5 at gunpoint. You can also buy these CPUs and put them on a bookshelf just to look at.

Fact is, Intel has a good value proposition with their new i3 + DDR4. AMD has no value proposition.
Well then I guess we're lucky to have Intel.
 
You completely missed the point I'm making: you know the P cores are always better than the E cores, so you can avoid using the E-cores entirely and easily. That is very much not the case for AMD's design: here you'll have to know which workloads prefer which, and there's no way the scheduler is going to know which one is always better for the use case.
But Windows and Linux know how to. AMD has also built into the schedulers (and power plans?) which CCD tasks will be assigned to first. What you're saying doesn't make sense to me on either side of the argument.

Regards.
 
You're forced to use DDR5 to use these new AMD chips.

If you want to be an obsessive pedant about it, no, you're not "physically forced" to buy DDR5 at gunpoint. You can also buy these CPUs and put them on a bookshelf just to look at.

Fact is, Intel has a good value proposition with their new i3 + DDR4. AMD has no value proposition.
AMD hasn't phased out AM4 though... In fact, the Ryzen 5600 is the best bang for buck currently.

Regards.
 
  • Like
Reactions: hannibal
X3D model at 120W? Damn. No value lost on my 7700X since I locked it at 105W to keep it under 80ºC at all times
Value was lost with these CPUs because they will thermal throttle anyway, forcing you either to keep them at 95ºC (plus the added fan noise) or to lock them at a lower power consumption.
In real-world games the 5950X probably does not lose by much at higher resolutions, and with 2K, 4K and those weird ultrawides becoming more and more common, the load on the CPU is a lot less. It also does not require all the top-end gear that does not do much at all for gaming.
I have the 5800X3D, which is also 105W. While gaming it only gets into the high 60s; even in the peak of summer with no A/C it was only just getting to the low 70s, and this is using a 360mm AIO. When we look at Intel, with >200W CPUs still running OK, it's not as simple as just looking at the wattage. We will have to wait for reviews to know how hot these CPUs run. Looking at the rest of the specs, these should be a massive step up over the 5800X3D in performance, so even if they run in the low 80s I'd be happy, provided the reviews back up the expectations.

As to whether it's needed, well, that depends. I'm using a 1440p 240Hz monitor, and in AAA FPS games I try to get the best fps possible. In MW2 I average about 200-210 fps; I have tested dropping down to 1080p and the fps change was negligible, so I am fairly sure I am at the best my CPU can push out. I am not planning to upgrade, but I can see why someone buying a CPU for gaming in the near future might decide getting the best is what they want/need.
 
But Windows and Linux know how to. AMD has also built into the schedulers (and power plans?) which CCD tasks will be assigned to first. What you're saying doesn't make sense to me on either side of the argument.

Regards.
How is the OS or the scheduler supposed to know which game will run better with 3D cache or with higher clocks?
Any automated process will pretty much be: if it's a game, go to the 3D cache; if not, go to the faster cores (or use all cores).
Unless AMD made a list of game .exe names that run better on 3D cache and is sending only those to the 3D-cache CCD.
 
How is the OS or the scheduler supposed to know which game will run better with 3D cache or with higher clocks?
Any automated process will pretty much be: if it's a game, go to the 3D cache; if not, go to the faster cores (or use all cores).
Unless AMD made a list of game .exe names that run better on 3D cache and is sending only those to the 3D-cache CCD.
Ask that to Intel :LOL:

But in all seriousness, there's something called "telemetry", which has been in use for CPUs since HyperThreading (or SMT) was created. It's the same thing Intel had to solve with P and E cores when you have software running, and they said, "ah, but if your game is running on the P cores, we'll assign everything else to the E cores!" So yeah, AMD knows how to assign the correct CCD for whatever tasks they want, as long as the OS uses the CPU telemetry.

Worst case, you can always do core-parking anyway.

Regards.
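Since the thread keeps coming back to pinning games to the right CCD by hand, here is a minimal sketch of what tools like Process Lasso effectively do, using Linux's affinity API. The CCD-to-logical-CPU layout below is an assumption for illustration (contiguous blocks of 16 logical CPUs per 8-core SMT CCD); real numbering varies by kernel and BIOS, so query the topology before relying on it.

```python
import os

# Assumed layout: each CCD exposes 8 cores x 2 SMT threads as a
# contiguous block of logical CPUs (0-15 = CCD0, 16-31 = CCD1).
def ccd_logical_cpus(ccd_index, cores_per_ccd=8, smt=True):
    threads = cores_per_ccd * (2 if smt else 1)
    start = ccd_index * threads
    return set(range(start, start + threads))

def pin_to_ccd(pid, ccd_index):
    """Restrict a process to one CCD's logical CPUs (Linux-only call)."""
    os.sched_setaffinity(pid, ccd_logical_cpus(ccd_index))
```

With that layout, `pin_to_ccd(game_pid, 0)` would keep a game on the (hypothetical) V-Cache CCD, which is all the "tie games to the correct CCD" workaround amounts to.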
 
The 5600 is no longer available pretty much anywhere besides the US.

Besides, the i5-13400F is cheaper and much faster.
The 5600 is plenty available outside the USA... I don't know what you're talking about XD

Also, the 13400F just released, so I've yet to see benchmarks of it and how it stacks up in the "value" range. That being said, I don't think you're wrong, but saying AMD has no value alternative because AM5 is DDR5-only is... well... incorrect?

Regards.
 

hannibal

Distinguished
The 5800X3D isn't value by any measure or metric. You might be able to plonk it on a cheap board with budget RAM, but that doesn't magically make the £350-400 chip budget, value, or remotely an answer to a 12/13400 or any decent i3 option.
Get the 5600 then if you need a budget option... AMD has no problem selling their AM4 products as long as DDR5 remains expensive.
 
Last edited:

KyaraM

Admirable
Worst case, you can always do core-parking anyway.

Regards.
Love how that is always an argument with AMD ("you can just use eco-mode! You can just use Process Lasso to tie games to the correct CCD!") when stuff like that is never accepted as an argument with Intel ("everything has to run stock or it is not comparable at all!"). It's also amazing how it is constantly ignored that cross-CCD core use has about ten times higher latency than accidentally using an E-core, and that AMD has outright recommended disabling one CCD for the best gaming experience since at least Ryzen 5000. Let's just continue the obvious hypocrisy and bias, it's okay.

In other news, I'm actually amazed at how stupid AMD is here. Einstein was right once again. What a waste of silicon those two high-end CPUs are going to be...
 
Love how that is always an argument with AMD ("you can just use eco-mode! You can just use Process Lasso to tie games to the correct CCD!") when stuff like that is never accepted as an argument with Intel ("everything has to run stock or it is not comparable at all!"). It's also amazing how it is constantly ignored that cross-CCD core use has about ten times higher latency than accidentally using an E-core, and that AMD has outright recommended disabling one CCD for the best gaming experience since at least Ryzen 5000. Let's just continue the obvious hypocrisy and bias, it's okay.

In other news, I'm actually amazed at how stupid AMD is here. Einstein was right once again. What a waste of silicon those two high-end CPUs are going to be...
Er... That applies to both Intel and AMD; it wasn't intended as a solution for AMD exclusively. If Intel's thread scheduler doesn't work the way you want it to, you can always override it.

The fact you read it that way says more about you :)

Regards.
 
  • Like
Reactions: bit_user

bit_user

Titan
Ambassador
You can just use Process Lasso to tie games to the correct CCD!")
To be fair, we don't know with these new 3D models. Don't compare speculation about unreleased products with commentary about released & reviewed ones.

cross-CCD core use has about ten times higher latency than accidentally using an e-core
Huh? That statement makes no sense. If you accidentally run something on an e-core, it runs slower until it gets migrated again, some milliseconds later. If you access data in a non-local L3 segment, that single access takes longer. Even then, the CPU's out-of-order engine and hyperthreading can potentially find other work to do. Those are two vastly different issues spanning at least 5 orders of magnitude different timescales (tens of nanoseconds vs milliseconds).
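The five-orders-of-magnitude gap is easy to sanity-check with made-up but plausible numbers:

```python
# Back-of-envelope on the two timescales being conflated (illustrative
# orders of magnitude, not measurements): one cross-CCD L3 access costs
# extra nanoseconds, while landing on the wrong core costs the whole
# interval until the scheduler gets a chance to migrate the thread.
cross_ccd_l3_access = 100e-9   # ~100 ns per remote-L3 access
wrong_core_residency = 10e-3   # ~10 ms until the next migration chance
print(round(wrong_core_residency / cross_ccd_l3_access))  # ~100,000x
```

So comparing "ten times higher latency" across the two cases mixes a per-access cost with a per-scheduling-interval cost.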
 
Last edited:
  • Like
Reactions: froggx

froggx

Distinguished
Sep 6, 2017
Shall i wait for 7800x3d, or buy 13400 or 13700 ? I am perplexed now. Help.
You should prolly get the one that's best for what you wanna use it for. Unless you don't want to do that, in which case you shouldn't. Your question is a bit too open for interpretation to get helpful help.

You might have more luck posting in the cpu forums.
 
Last edited:
  • Like
Reactions: bit_user
But Windows and Linux know how to. AMD also has build into the schedulers (and power plans?) which CCD tasks will be assigned to first. What you're saying didn't make sense to me on neither side of the argument.

Regards.
Windows and Linux don't have a clue what application prefers what level of cache, and neither does a scheduler. The article literally makes it sound like they're whitelisting and using the scheduler to make sure the games stay where they're supposed to:

AMD is working with Microsoft on Windows optimizations that will work in tandem with a new AMD chipset driver to identify games that prefer the increased L3 cache capacity and pin them into the CCD with the stacked cache. Other games that prefer higher frequencies more than increased L3 cache will be pinned into the bare CCD. AMD says that the bare chiplet can access the stacked L3 cache in the adjacent chiplet, but this isn’t optimal and will be rare.​

What about non-gaming workloads? What are they doing to prevent the wrong CCD from accessing the extra cache? They haven't announced any sort of hardware solution built into the chips (like Thread Director), so I'm assuming they don't have one, just because they'd be touting it if they did. There is no easy way, aside from a whitelist, for AMD to dictate what goes where, because they wouldn't know up front which to choose. In the case of P/E cores, starting with the P cores is always the right choice, and the problems have been when that doesn't happen. There is no always-right case for AMD here, so I don't know how you think the two are the same.

edit: I suppose AMD could do something like a machine learning algorithm for cache hits, but I'm pretty sure this would need a hardware solution similar to what nvidia does to enable dlss.
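The whitelist scheme the article describes can be caricatured in a few lines. The executable names and the list itself are invented for illustration; AMD's actual driver logic is not public.

```python
# Hypothetical whitelist of games known to prefer the extra L3
# (these names are made up).
CACHE_PREFERRING = {"simgame.exe", "gridstrategy.exe"}

def pick_ccd(exe_name: str, is_game: bool) -> str:
    """Route a process to a CCD the way the article describes."""
    if is_game and exe_name.lower() in CACHE_PREFERRING:
        return "v-cache ccd"
    # Everything else, including non-games, goes to the bare CCD,
    # which clocks higher.
    return "frequency ccd"

print(pick_ccd("SimGame.exe", True))      # v-cache ccd
print(pick_ccd("renderfarm.exe", False))  # frequency ccd
```

The objection in the post above is exactly that such a table has no entry for workloads AMD never profiled, so anything unknown falls through to the default branch.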
 
Windows and Linux don't have a clue what application prefers what level of cache, and neither does a scheduler. The article literally makes it sound like they're whitelisting and using the scheduler to make sure the games stay where they're supposed to:

AMD is working with Microsoft on Windows optimizations that will work in tandem with a new AMD chipset driver to identify games that prefer the increased L3 cache capacity and pin them into the CCD with the stacked cache. Other games that prefer higher frequencies more than increased L3 cache will be pinned into the bare CCD. AMD says that the bare chiplet can access the stacked L3 cache in the adjacent chiplet, but this isn’t optimal and will be rare.​

What about non-gaming workloads? What are they doing to prevent the wrong CCD from accessing the extra cache? They haven't announced any sort of hardware solution built into the chips (like Thread Director), so I'm assuming they don't have one, just because they'd be touting it if they did. There is no easy way, aside from a whitelist, for AMD to dictate what goes where, because they wouldn't know up front which to choose. In the case of P/E cores, starting with the P cores is always the right choice, and the problems have been when that doesn't happen. There is no always-right case for AMD here, so I don't know how you think the two are the same.

edit: I suppose AMD could do something like a machine learning algorithm for cache hits, but I'm pretty sure this would need a hardware solution similar to what nvidia does to enable dlss.
As you yourself said, it's not like it's impossible to do or hasn't been done before. AMD's way of doing it may not be exactly the same as Intel's, but they do have a way to do it. And as I mentioned, even if it doesn't quite work, the penalty from a game running on the non-V-Cache CCD shouldn't be too terrible either?

Regards.
 

bit_user

Titan
Ambassador
They haven't announced any sort of hardware solution built into the chips (like Thread Director), so I'm assuming they don't have one, just because they'd be touting it if they did.
Don't be so sure. All modern CPU cores have built-in performance monitoring instrumentation, and it seems like AMD has been extending it in Zen 4:


It's plausible the kernel could have a background task that snoops these registers on the different cores/threads and tries to decide if it should migrate them.

IMO, this isn't an enviable tradeoff to have to make, but just maybe you can do meaningfully better than a blind guess.

edit: I suppose AMD could do something like a machine learning algorithm for cache hits, but I'm pretty sure this would need a hardware solution similar to what nvidia does to enable dlss.
It needn't be as fancy as machine learning. You could simply rank threads by which are getting the highest L3 cache hit rate.
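A non-ML version of that ranking idea might look like the sketch below. The per-thread counter samples are invented; on real hardware they would come from the performance-monitoring registers mentioned above.

```python
def rank_by_l3_hit_rate(samples):
    """samples: {tid: (l3_hits, l3_accesses)} read from perf counters."""
    rate = {tid: hits / max(accesses, 1)
            for tid, (hits, accesses) in samples.items()}
    # Highest hit rate first: best candidates for the V-Cache CCD.
    return sorted(rate, key=rate.get, reverse=True)

# Made-up sample: thread 101 is cache-hungry, 202 barely reuses data.
samples = {101: (900, 1000), 202: (300, 1000), 303: (640, 800)}
print(rank_by_l3_hit_rate(samples))  # [101, 303, 202]
```

A background kernel task could recompute this ranking periodically and migrate the top threads, which is one concrete way "snooping these registers" could beat a blind guess.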
 
Last edited:
  • Like
Reactions: thestryker

PiranhaTech

Reputable
Mar 20, 2021
I wonder if the 7000 non-X3D CPUs were built with the intention of adding X3D later. I like the idea of using a low-wattage CPU for a backup PC.
 

PCWarrior

Distinguished
May 20, 2013
When I saw that the octa-core model (the 7800X3D) had a reduction in clock speed while the dual-die models did not have a frequency decrease, I was sure about what AMD had done. Primarily for thermal reasons, when there is V-Cache stacked on top of the die, the frequency limit is 5GHz. So they left the other die without V-Cache so that it can hit higher frequencies. And though this sounds good on paper, here is the second reading.

When you are gaming, AMD says the gaming load is going to be handled by the die with the V-Cache. This means that even the 7950X3D, despite its advertised 5.7GHz boost, is only going to run games at 5GHz at most (and that single-threaded). So there is not going to be a gaming performance difference between the 7950X3D and the 7800X3D. The advertised 5.7GHz on the 7950X3D will only be used in non-gaming workloads, where the thread is going to be executed on the die without the V-Cache. And that is assuming that AMD's implementation of this asymmetric design won't be buggy, which I doubt. Ah, and the gaming benchmarks they showed winning against the 13900K with double-digit margins are mostly the few games where AMD was already winning.
 