News AMD Announces Ryzen 7000X3D Pricing: $449 to $699 Starting Feb 28th

The 7000 series got much closer to 200 W+ than ever before, despite Intel having been well above that for years, when the reaction was "that's crazy, who wants a space heater...."
AnandTech looked at the 7950X & 13900K at several fixed power levels. What I found interesting is that the 13900K usually needed roughly double the TDP of the 7950X to reach equal performance. Also interesting was that Zen 4's performance scaling was overall pretty poor going from 105 W > 125 W > the stock 230 W, while the 35 W > 65 W step was quite good. Intel's performance scaling, by contrast, kept improving at higher and higher TDPs. It shows how much more efficient Zen is compared to Core.
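To make the efficiency comparison concrete, here's a tiny perf-per-watt sketch. The scores and power figures below are hypothetical placeholders for illustration, not AnandTech's actual numbers:

```python
# Sketch: comparing performance-per-watt at a matched performance target.
# All numbers below are hypothetical placeholders, not benchmark data.

def perf_per_watt(score: float, package_watts: float) -> float:
    """Simple efficiency metric: benchmark score per watt of package power."""
    return score / package_watts

# Hypothetical multithreaded scores at the power each chip needs
# to reach roughly the same performance level.
zen4_7950x = perf_per_watt(score=38000, package_watts=105)
rpl_13900k = perf_per_watt(score=38000, package_watts=210)

print(f"7950X : {zen4_7950x:.1f} pts/W")
print(f"13900K: {rpl_13900k:.1f} pts/W")
print(f"Efficiency ratio: {zen4_7950x / rpl_13900k:.1f}x")  # 2.0x at double the power
```

At equal performance, needing double the power means exactly half the efficiency, which is the pattern the AnandTech testing described.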
 
AMD is bamboozling us yet again via slides. They are learning tricks from Intel.

Here's the deal: for cache heavy apps like games it's wonderful. That's where the slides are correct.

This is what they are not showing you: they have not fixed the speed disparity issue for the cache chiplets. This is why you will NOT see scores from things like Photoshop on a single-chiplet design.

The 7900X3D and 7950X3D use an asymmetric chiplet design: one CCD for high clock speed with less cache, and one with the stacked cache.

If you compare an all-core Photoshop or Premiere run, or heavy multithreaded workloads like V-Ray or complex Excel sheets, you will notice the 7900X3D behind the 7900X nearly every time (10:1). Although the disparity will be much smaller than between the 5800X and 5800X3D, thanks to the asymmetric design.

The other problem is that AMD needs to whitelist apps for best performance on the cache cores and handle scheduling at the kernel level. That just isn't going to fly. There are better ways, but the Windows kernel just wasn't designed for that. Years later there are still issues with cache thrash over Infinity Fabric when a process switches between cores.

So a little bit of smoke and mirrors I think. But still a great chip for gamers.
 
Intel is building up many FABs, yes, that much is true.

But they aren't getting into the DRAM / Memory business.

There's a damn good reason why Intel stopped making memory: the profit margins on it are low.

https://www.digitimes.com/news/a20220429VL209/logic-ic-memory-chips.html


Intel has now sold off its NAND & Optane divisions as well.

Intel is shutting down Optane (a huge mistake IMO; it should have been sold off to the big memory companies instead).
https://www.techradar.com/news/intel-is-shutting-down-its-optane-memory-business

That much is also proven by history. So until you can show me that they're willing to enter the DRAM market, they'll be sourcing it from one of the major suppliers.

Be it SK Hynix, Samsung, Micron, etc.

There are far better (higher-profitability) things to make in those fabs; DRAM isn't one of them.

At the end of the day, it's up to Intel to put the $$$ into making it standard and buying in bulk.

Will Intel risk doing that? Only Pat Gelsinger and the teams inside will know.

But given the BoM cost for HBM, I think it's going to remain an Enterprise & HPC-only solution, where costs don't really matter that much and wallets are deeper.

Average mainstream consumers are VERY price sensitive.

They did sell off Optane. It was a joint venture with Micron, IIRC, and Intel divested their investment when they saw little benefit in terms of performance boost.

It does hold niche benefits for specialized server markets like search engines/large databases, but it wasn't enough to justify large investment.
 
I already quoted this: you can have persistent and normal DIMMs at the same time in your system, probably even on the same RAM channel.
In order to use App Direct mode, the software has to be aware of it, since App Direct is application driven. If the application isn't aware of Optane you can only use Memory Mode, and in Memory Mode you lose the non-volatile nature of Optane.

The biggest driver for App Direct was applications that needed a huge amount of RAM, where startup after a server crash could be measured in hours. The best known of these is the SAP HANA DB, which is 100% in RAM. You can see how, if your server crashed, you would lose all current unsaved datasets (possibly saved ones too, had the writes not completed), and depending on the size of the DB (a 256 GB RAM DB is the smallest PRD-approved on AWS), restarting the DB can take a long time.

This all sounds good until you realize that when this was made you needed the super-expensive L-series CPUs, and 128 GB Optane was only half the price of 128 GB DRAM. Sure, going DRAM would double your costs (not exactly, as you still needed DRAM alongside the Optane), but you wouldn't have the negatives of Optane.

Right now the only version of Optane RAM supported for PRD on VMware (no reason to have the DB on a physical appliance) is PMem 100, the first generation. There isn't any support for PMem 200, i.e. for Cooper Lake and Ice Lake, and I don't even know if they will support PMem 300 for SPR. On top of that, unless you were on HANA 2.0 SP3 (???) or later, you didn't have Optane support at all. At that point you could only use Memory Mode, which isn't supported for PRD, and Optane is useless.
 
AMD announced the availability and pricing for three new Ryzen 7000 processors with the revolutionary 3D V-Cache tech.

AMD Announces Ryzen 7000X3D Pricing: $449 to $699 Starting Feb 28th : Read more
There is a big inaccuracy in this article. The 7900X3D and 7950X3D each have only one CCD equipped with the 64 MB SRAM chip, so "128MB of performance adding cache" is incorrect. This is also why the 7800X3D is limited to 5 GHz, whereas the 7900X3D and 7950X3D maintain their non-3D counterparts' boost clocks: they have a regular CCD that does not need to be limited by voltage/power/cooling constraints and can be dynamically switched to for applications and games that benefit more from clock speed than cache.
 
Yeah, let's ignore all the bashing against Intel architecture going on, the constant calls that Intel is doomed and too stupid to run their business, or how even people with Intel chips criticize the at times extreme power requirements at the top and call everyone a blind Intel fanboy. And the space heater complaint came from the AMD camp, btw, not the Intel camp. Did you spend the last year under a rock or something? The one with the bias here seems to be yourself, and a huge one at that seeing how you completely ignore any and all negative comments about Intel.

Btw, you are just as guilty of moving the goalposts. For a year I read mockery about how you can't count X because the CPUs have to run at stock (with X being anything from power limiting to undervolting to overclocking the CPU) for the standard user experience, whenever it was brought up in regards to Intel CPUs.

But eco mode (basically CPU power limiting easy enough for anyone, since setting a value or two in BIOS is apparently so hard), undervolting, etc. are completely fine when it is AMD — still the standard user experience somehow! Because there you don't have to change BIOS settings? Pffft. Right. And don't even mention Windows programs for that; Intel has them too, not just AMD.

And now, looking around in this very topic, it's "just use Process Lasso to bind cache-sensitive games to the 3D V-Cache CCD and clock-speed-sensitive games to the normal CCD, problem solved!", when the same argument was completely rejected ("oh, but I shouldn't have to use a third-party tool to bind games to the P-cores exclusively!") whenever it was brought up with Intel's hybrid CPUs over the past year.

Inter-CCD latency (which someone, I think Terry, has a nice graph for) is also completely ignored, while E-cores are made out to be the biggest issue with gaming ever. But Intel fanboyism is everywhere. LMFAO.


I recommend they stop trying to make Intel users out as idiots, as some here tend to do, and drop the virtue signaling I got from some AMD fanboys ("I'm so much better than you because I don't support Intel, and you are bad for doing so!"). If you don't attack people, they are less likely to shoot back at you; it's as simple as that. I also don't sense a lick of interest in competition from many AMD fans, just schadenfreude about Intel seemingly doing badly. The stupid bashing and uninformed blubbering about how the CPUs work can also go away, along with acting as if AMD never does anything wrong. That should help. AMD cares just as little for you as Intel does; big corps are all only out for your money and satisfying their investors. AMD has proven recently (and before, but people don't want to hear that either) that they are no better there, and the sooner people understand that, the better.

I very much agree on the bashing bit; that was my whole point in my first post. Intel fanboys are out there, yes, I didn't deny it, but they are a minority, if a vocal one, as are AMD fanboys. Most on Tom's being hardware neutral WAS my point in my first post. I just didn't deny that the fanboys have been up in arms as of late; they have on both sides. Fanboys suck. I don't get being a slave to a corporation that cares only for your money, and IDC which company it is. They are all about the bottom line and have zero urge to be your best friend forever. I agree more folks need to understand that. Sorry if you took me as attacking Intel directly... anything but. Intel and AMD are both duplicitous, and I make no mistake otherwise.

Edit: Also, much of what you said has nothing to do with what I've personally said in the past, so I only get some of your 'whole' argument. I stand by the point that fanboys do no one any good. I ran Intel for 18 straight years until very recently, on my latest build, when AMD finally earned the nod. Both CPU architectures from AMD/Intel offer great value; it was a tough choice. Idk what else to say.
 
What a strange product line to try and price....
The V-Cache only really benefits games. The 7800X3D is best poised to take advantage of it, much like the 5800X3D. The higher-tier options won't win because of more cores, only because of the additional cache/frequency.
So you have this non-linear performance transition from gaming to productivity, where AMD has to hope there are a lot of high-tier, productivity-focused consumers out there buying their own hardware and gaming after hours... a pretty niche bunch, I bet.
This essentially makes the 7800X3D the cash-cow that has to pay for the majority of the product stack, so you know value will be terrible.
In all fairness though, application data processing speed doesn't make a bit of difference for most people. My thirteen-year-old son thinks a few extra FPS in his game are a really big deal, even when the FPS grossly outruns his monitor's refresh rate. So holding the gaming performance crown is a huge marketing point, even though most people will never own a flagship processor.

We will have to see the new benchmarks, but this mid-cycle performance jump could put Intel in a bit of a spot. They have enjoyed a little breathing room with Alder Lake and Raptor Lake. If the rumor mill is to be believed, the tile-on-interposer setup of Meteor Lake (and beyond) is potentially more efficient but not increasing (or perhaps only matching) the clocks Raptor Lake is getting. So while this will be great for laptops, a flagship desktop chip based on the new setup may not be marketable until process nodes catch up. As such, we may see a Raptor Lake+ variant released to counter X3D as a performance/marketing placeholder in the near future, but the rumor mill doesn't indicate that Intel has anything game-changing for Meteor Lake on desktop in the immediate term. This highly competitive tit-for-tat release cycle should at the very least keep both companies' margins down to compete on price, since neither has a compelling performance lead.

I admit that all of this is highly speculative, but nonetheless the gaming performance crown has a lot to do with the margins set and marketability across all lower segments.
 
You don't have to imagine; you just have to spend about $900 on a 128 GB DIMM module.
Although they might be cheaper by now.
https://www.anandtech.com/show/14180/pricing-of-intels-optane-dc-persistent-memory-modules-leaks
I already quoted this: you can have persistent and normal DIMMs at the same time in your system, probably even on the same RAM channel.
The main problem was that the Optane DIMM was a separate physical DIMM product, and Optane required its own proprietary memory controller support.

I'm talking about "FULL integration" via JEDEC, where the DRAM die sits below the Optane die in the same physical package as one RAM package, with standardized, open community protocols & commands.

Where they can literally talk to each other via TSVs (through-silicon vias) and not have to travel from one DIMM (Optane) to the memory controller and back out to the other DIMM (DRAM).

That's very inefficient, since the memory signals take a round trip measured in many centimeters versus a distance measured in single-digit millimeters within the same physical RAM package.

There's a VERY big difference in signal travel distance that affects latency and energy spent. That would affect Read/Write/Copy Bandwidth & latency as well.
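As a rough back-of-the-envelope check on the distance argument (assuming a typical ~15 cm/ns signal speed in FR-4 PCB; both path lengths below are illustrative guesses):

```python
# Sketch: propagation delay of a memory signal over PCB traces vs. in-package TSVs.
# Assumes ~15 cm/ns signal speed in FR-4 (roughly half of c); illustrative only.

SIGNAL_SPEED_CM_PER_NS = 15.0

def propagation_delay_ns(distance_cm: float) -> float:
    """Time-of-flight for a signal over the given trace length."""
    return distance_cm / SIGNAL_SPEED_CM_PER_NS

# DIMM -> memory controller -> other DIMM: assume a ~20 cm round trip.
dimm_round_trip = propagation_delay_ns(20.0)

# Die-to-die through TSVs in the same package: assume ~0.1 cm (1 mm).
tsv_path = propagation_delay_ns(0.1)

print(f"DIMM round trip: {dimm_round_trip:.3f} ns")
print(f"TSV path:        {tsv_path:.4f} ns")
print(f"Ratio: {dimm_round_trip / tsv_path:.0f}x")  # ~200x shorter flight time
```

Of course, raw flight time is only a small part of total memory latency; controller and protocol overheads dominate, so treat this purely as an illustration of the distance argument.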

Also, open standards & protocols are important, so it's VERY important that JEDEC gets behind this and standardizes it for everybody to use.

Not Intel-proprietary tech with royalty payments.
 
Yes, and I used (plural) because I was mainly answering the post you quoted, but yours was the last post, so I quoted you.

It wasn't super clear, and I accidentally edited out my reference to you (plural)... (I just saw that; my apologies. It was my last post before bed, so... lol.) Point being, I kind of figured this was your point; I just dumbly deleted it, I guess. I was just trying to be clear that I didn't think it was Intel fanboy central here. It's not, AFAICT; Tom's is one of the better-balanced sites forum-wise against fanboyism. Some other tech sites are literal breeding grounds. I am glad Tom's isn't, and that was ultimately my point. Fanboys of any flavor help no one but a corporation's bottom line, and it certainly doesn't help consumers. Sorry for any confusion.
 
They did sell off Optane. It was a joint venture with Micron, IIRC, and Intel divested their investment when they saw little benefit in terms of performance boost.

It does hold niche benefits for specialized server markets like search engines/large databases, but it wasn't enough to justify large investment.
Optane is perfect for random I/O throughput and for workloads that need ultra-low latency from storage.

Ergo, it's perfect as an OS boot drive where your OS & main applications sit.

It also MASSIVELY helps HDDs perform like a SATA SSD or NVMe SSD when attached to the correct interface.

Just 16/32/64/128 GiB of Optane (depending on the size of the HDD) built into every HDD would MASSIVELY improve its performance and latency, acting as a huge read/write buffer and as a sudden-power-down buffer that can flush all in-flight read/write data into Optane.

Optane would also MASSIVELY help optical disc drives as a form of virtual optical disc changer, where the contents of the disc could be dumped into Optane memory and reads/writes would run at the speed of Optane.

Imagine how many full CDs/DVDs/Blu-rays could live in 16/32/64 GiB of Optane.

There are so many uses for Optane; it took the unimaginative folks at Intel, who couldn't think of using Optane in non-standard ways to improve performance, to tank the product.
 
I don't expect dumped prices, in particular as I wouldn't be fond of working for $10 a day in a sweatshop myself either (and even less so while a handful of guys in the meantime have so many billions that they no longer know what to spend it on).

If the retail price of the 7950X3D ends up almost 50% higher than the 13900K's, I may sit this one out, though (for the time being).
 
Optane is perfect for random I/O throughput and for workloads that need ultra-low latency from storage.

Ergo, it's perfect as an OS boot drive where your OS & main applications sit.

It also MASSIVELY helps HDDs perform like a SATA SSD or NVMe SSD when attached to the correct interface.

Just 16/32/64/128 GiB of Optane (depending on the size of the HDD) built into every HDD would MASSIVELY improve its performance and latency, acting as a huge read/write buffer and as a sudden-power-down buffer that can flush all in-flight read/write data into Optane.

Optane would also MASSIVELY help optical disc drives as a form of virtual optical disc changer, where the contents of the disc could be dumped into Optane memory and reads/writes would run at the speed of Optane.

Imagine how many full CDs/DVDs/Blu-rays could live in 16/32/64 GiB of Optane.

There are so many uses for Optane; it took the unimaginative folks at Intel, who couldn't think of using Optane in non-standard ways to improve performance, to tank the product.

Hardware sites have run tests of hybrid Optane-cache/SSD boot systems. The speed improvements were marginal.

In terms of memory, Optane was fast IF you were in standard Memory Mode. When you required non-volatility, it was barely faster than traditional NAND.
 
AMD is bamboozling us yet again via slides. They are learning tricks from Intel.

Here's the deal: for cache heavy apps like games it's wonderful. That's where the slides are correct.

This is what they are not showing you: they have not fixed the speed disparity issue for the cache chiplets. This is why you will NOT see scores from things like Photoshop on a single-chiplet design.

The 7900X3D and 7950X3D use an asymmetric chiplet design: one CCD for high clock speed with less cache, and one with the stacked cache.

If you compare an all-core Photoshop or Premiere run, or heavy multithreaded workloads like V-Ray or complex Excel sheets, you will notice the 7900X3D behind the 7900X nearly every time (10:1). Although the disparity will be much smaller than between the 5800X and 5800X3D, thanks to the asymmetric design.

The other problem is that AMD needs to whitelist apps for best performance on the cache cores and handle scheduling at the kernel level. That just isn't going to fly. There are better ways, but the Windows kernel just wasn't designed for that. Years later there are still issues with cache thrash over Infinity Fabric when a process switches between cores.

So a little bit of smoke and mirrors I think. But still a great chip for gamers.

While dual-CCD X3D chips have me hopeful, I posted a similar worry earlier in the thread. AMD needs to nail scheduling for the dual-CCD setup to work. In a perfect world their solution is flawless, BUT if history is any example, they will kind of get scheduling working right and kind of won't. Just look at Intel's big.LITTLE cores... I expect a similar outcome. It will mostly work, but for the best results you'll need to turn off the CCD you're not trying to use in gaming (or, in Intel's case, the little cores), just like with dual-CCD designs now, funny enough. I think these asymmetric/dual-CCD designs from both firms will take years to perfect thread scheduling, if it ever happens. Though I hope I am wrong, as there is a lot of power on tap in these chips if utilized fully.
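The manual workaround discussed in this thread (pinning a game to one CCD, Process Lasso-style) boils down to building a CPU-affinity bitmask. A minimal sketch, assuming the V-Cache CCD shows up as logical CPUs 0-15 (8 cores + SMT) — real numbering varies by board and BIOS, so check your topology first:

```python
# Sketch: building a CPU-affinity bitmask to pin a process to one CCD.
# Assumes the 3D V-Cache CCD is exposed as logical CPUs 0-15; the second
# CCD as 16-31. This layout is an assumption, not a guarantee.

def ccd_affinity_mask(first_cpu: int, num_cpus: int) -> int:
    """Bitmask with one bit set per logical CPU in [first_cpu, first_cpu + num_cpus)."""
    mask = 0
    for cpu in range(first_cpu, first_cpu + num_cpus):
        mask |= 1 << cpu
    return mask

ccd0 = ccd_affinity_mask(0, 16)   # logical CPUs 0-15
ccd1 = ccd_affinity_mask(16, 16)  # logical CPUs 16-31

print(f"CCD0 mask: {ccd0:#x}")  # 0xffff
print(f"CCD1 mask: {ccd1:#x}")  # 0xffff0000

# The same mask can then be handed to the OS, e.g. on Windows:
#   start /affinity FFFF game.exe
# or on Linux:
#   taskset 0xffff ./game
```

This is exactly what tools like Process Lasso do under the hood, just with a UI and per-app rules on top.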
 
Imagine how many Full CD's / DVD's / Blu-Ray's could live in 16 GiB / 32 GiB / 64 GiB of Optane.
Well, considering we know the capacity of each of those, it isn't hard. CD = 700 MB, DVD = 9.4 GB, Blu-ray = 128 GB. That means in 64 GB you get roughly 91 CDs, 6 DVDs, or half a Blu-ray. Cut those numbers in half for 32 GB or to a quarter for 16 GB.
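The arithmetic above is easy to sketch, using the same per-disc capacities:

```python
# Sketch: how many full optical discs fit in a given Optane capacity.
# Disc capacities as stated above: CD 700 MB, double-sided DVD 9.4 GB,
# quad-layer BDXL Blu-ray 128 GB.

DISC_GB = {"CD": 0.7, "DVD": 9.4, "Blu-ray": 128.0}

def discs_per_capacity(capacity_gb: float) -> dict:
    """How many discs of each type fit in capacity_gb of storage."""
    return {disc: capacity_gb / size for disc, size in DISC_GB.items()}

for cap in (16, 32, 64):
    counts = discs_per_capacity(cap)
    print(f"{cap} GB Optane: " + ", ".join(f"{n:.1f} {d}s" for d, n in counts.items()))
```

Note these assume every disc is completely full; as pointed out below, in practice most aren't.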

It also MASSIVELY helps HDDs perform like a SATA SSD or NVMe SSD when attached to the correct interface.
It helps HDDs because of their extremely slow (compared to RAM) random access times. When used with NAND, it helps with writes to NAND, which is why it can speed up vSAN performance for things like DBs with a good number of writes, even on NVMe storage. However, it has limited use as a read buffer, since NAND already has very high random read speed.

It's perfect as an OS boot drive where your OS & main applications sit.
Yes, it is a very good OS drive; however, it is EXPENSIVE. Currently you do not want an OS/application drive smaller than 256 GB, and most people recommend 512 GB. The Intel 905P at 480 GB runs more than $400. That is a lot of money to spend on an OS drive, even if it gives you better performance. Migrating it into RAM won't help much either, and applications would need to be aware of it.
 
In terms of memory, Optane was fast IF you were in standard Memory Mode. When you required non-volatility, it was barely faster than traditional NAND.
VMware's best practice was a 1:4 DRAM:Optane ratio for best performance. That means if you had 6x 128 GB Optane, you would get 6x 32 GB DIMMs, which would act as a high-speed cache for the slower Optane; the OS/hypervisor would then only see the 768 GB of Optane RAM. Compared to 768 GB of DRAM, the Optane was slower by a measurable amount, around 5%, in benchmarks I saw. The problem then is BOM: a 128 GB Optane DIMM is more expensive than a 64 GB DRAM DIMM, so you were paying more for 768 GB of Optane + cache RAM than for just the RAM itself. Going to 256 GB Optane, it wasn't really any cheaper than 128 GB DIMMs, so once again your total BOM was almost identical, except you had the performance loss with Optane.
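The 1:4 sizing rule above can be sketched as a tiny calculator (DIMM counts and sizes as in the example; this is just arithmetic, not official VMware tooling):

```python
# Sketch: sizing the DRAM cache for Optane PMem in Memory Mode per the
# 1:4 DRAM:Optane best-practice ratio described above.

def dram_cache_per_dimm(optane_gb: int, ratio: int = 4) -> int:
    """DRAM DIMM size (GB) to pair with each Optane DIMM at a 1:ratio cache ratio."""
    return optane_gb // ratio

optane_dimms = 6
optane_size = 128                           # GB per Optane DIMM
dram_size = dram_cache_per_dimm(optane_size)  # -> 32 GB DIMMs

visible_ram = optane_dimms * optane_size    # the hypervisor sees only the Optane capacity
print(f"Pair each {optane_size} GB Optane DIMM with a {dram_size} GB DRAM DIMM")
print(f"Hypervisor-visible memory: {visible_ram} GB")
```

Note the DRAM cache capacity is invisible to the OS, which is part of why the BOM comparison against an all-DRAM build works out so poorly.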

For App Direct there were a lot of issues, because you needed software that was aware of Optane. The speed-ups you saw were when starting something like a 6 TB RAM HANA DB. There, the standard SAN storage > RAM bootup would take around an hour, compared to a couple of minutes with Optane. From experience, I know that starting a 512 GB RAM HANA DB takes about 5-10 minutes on my VMware vSAN storage with all-NVMe drives. Does it really matter if I can reduce that from 5-10 minutes to a minute or so? Not really.
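A quick sanity check on those startup times, modeling startup as dataset size divided by storage read bandwidth (the bandwidth figures here are illustrative assumptions, not measurements):

```python
# Sketch: rough in-memory DB startup time = dataset size / storage read bandwidth.
# Bandwidth figures are illustrative assumptions, not measured values.

def startup_minutes(db_gb: float, read_gb_per_s: float) -> float:
    """Minutes to stream db_gb from storage into RAM at a sustained read rate."""
    return db_gb / read_gb_per_s / 60.0

print(f"6 TB over SAN @ 1.7 GB/s:        {startup_minutes(6000, 1.7):.0f} min")  # ~59 min
print(f"512 GB over NVMe vSAN @ 1.0 GB/s: {startup_minutes(512, 1.0):.1f} min")  # ~8.5 min
```

A sustained ~1.7 GB/s SAN read rate puts a 6 TB load at about an hour, consistent with the anecdote above; on fast NVMe storage the gap Optane could close shrinks to minutes.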
 
Hardware sites have run tests of hybrid Optane-cache/SSD boot systems. The speed improvements were marginal.

In terms of memory, Optane was fast IF you were in standard Memory Mode. When you required non-volatility, it was barely faster than traditional NAND.
From everything I've seen, it boosted HDD performance to SATA SSD levels when the data was in Optane.
Granted, that's not the most amazing thing in 2023, when we have NVMe SSDs, but the performance boost for HDDs is a HUGE percentage compared to where they started.

There's literally "no point" in using Optane with any existing NAND flash SSD; the performance gains are minimal.

Imagine if your HDD were attached via SAS or NVMe, which would let the full power of Optane help out a slower medium like a HDD.

IMO, Optane's raison d'être is to help media & storage slower than NAND flash play catch-up.

It would do that wonderfully.
 
Well, considering we know the capacity of each of those, it isn't hard. CD = 700 MB, DVD = 9.4 GB, Blu-ray = 128 GB. That means in 64 GB you get roughly 91 CDs, 6 DVDs, or half a Blu-ray. Cut those numbers in half for 32 GB or to a quarter for 16 GB.
That assumes each optical disc of that type is full. Many times they aren't even remotely full.

But the point is that it would help with buffering and lowering latency, along with massively improving bandwidth once the data is loaded.

It helps HDDs because of their extremely slow (compared to RAM) random access times. When used with NAND, it helps with writes to NAND, which is why it can speed up vSAN performance for things like DBs with a good number of writes, even on NVMe storage. However, it has limited use as a read buffer, since NAND already has very high random read speed.
There's literally no point in using Optane with existing NAND flash; the benefits are minimal.
Using it with HDD/ODD/tape storage — any media slower than NAND flash — should be its raison d'être.

Yes, it is a very good OS drive; however, it is EXPENSIVE. Currently you do not want an OS/application drive smaller than 256 GB, and most people recommend 512 GB. The Intel 905P at 480 GB runs more than $400. That is a lot of money to spend on an OS drive, even if it gives you better performance. Migrating it into RAM won't help much either, and applications would need to be aware of it.
Why wouldn't you want an OS drive smaller than 256 GB?

Let's put the "game drive" on a NAND flash SSD as a separate thing, along with any storage of large files.

The OS drive could get by on something far smaller for typical applications that aren't gaming or storage of massive files.
 
Imagine if your HDD were attached via SAS or NVMe, which would let the full power of Optane help out a slower medium like a HDD.
You will still be limited by the Optane controller and the amount of Optane available for parallel operations. The 32 GB Optane Memory module you could use for caching helped a lot with hot data and only marginally with cold data. For the cache to be most useful on a HDD you would need 64+ GB, probably 256 GB to be safe, and the BOM is WAY too high at that point.
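The hot/cold-data point can be illustrated with a simple hit-rate-weighted latency model (the ~10 µs Optane and ~10 ms HDD figures below are rough illustrative numbers, not device specs):

```python
# Sketch: effective access latency of an HDD fronted by an Optane cache,
# as a hit-rate-weighted average. Assumes ~10 us for an Optane hit and
# ~10 ms (10,000 us) for an HDD seek on a miss; both figures illustrative.

def effective_latency_us(hit_rate: float, cache_us: float = 10.0,
                         backing_us: float = 10_000.0) -> float:
    """Average access latency given the fraction of requests served from cache."""
    return hit_rate * cache_us + (1.0 - hit_rate) * backing_us

for hit in (0.5, 0.9, 0.99):
    print(f"hit rate {hit:.0%}: {effective_latency_us(hit):,.0f} us average")
```

Even at a 99% hit rate the average is dominated by the misses, which is why a cache too small to hold the hot set (and catch most of the accesses) barely moves the needle.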
 
You will still be limited by the Optane controller and the amount of Optane available for parallel operations. The 32 GB Optane Memory module you could use for caching helped a lot with hot data and only marginally with cold data. For the cache to be most useful on a HDD you would need 64+ GB, probably 256 GB to be safe, and the BOM is WAY too high at that point.
The only reason it's "WAY too high" is that Intel couldn't get enough orders to bring manufacturing to proper scale.

If Seagate and WDC both went all-in on attaching Optane to their HDDs to help with hot data, the cost per die would drop dramatically, to the point that 32 GB to 256 GB of Optane would be viable.

It's far better than the 256 MiB to 1 GiB of DRAM attached to each HDD's microcontroller, especially when you get flooded with massive amounts of data.