News AMD Announces Ryzen 7000X3D Pricing: $449 to $699 Starting Feb 28th

But the point is that it would help out with buffering and lowering latency along with massively improving bandwidth once the data is loaded.
Buffering is already done, as those devices already have RAM used as a buffer. That said, in new computers ODDs are basically obsolete unless you are building an HTPC.

There's literally no point in using Optane with existing NAND flash, the benefits are minimal.
Not true when looking at a SAN solution. Using Optane in a VMware vSAN as your write cache can increase your DB write performance by 25%. The reason is the high IOPS at low queue depth. Do note that in an all-flash VMware vSAN the cache drive is only a write cache; all reads are done from the NAND storage layer.

Why don't you want smaller than a 256 GB OS Drive?
The OS and applications are growing all the time. My work laptop has a 256GB NVMe SSD and I have 110GB free and that is only with the applications I use all the time for work. As it ages and more service packs are added the storage will continue to increase. Also due to the cost of drives there is little reason to go with a smaller OS drive anymore. For example the cheapest 256GB NVMe drive on pcpartpicker runs $20 whereas the 512GB runs $29. Why would you limit yourself to only 256GB when you can double your storage for less than a 50% increase in price?
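To put quick numbers on that point (a back-of-the-envelope sketch using the prices quoted above; Python only for the arithmetic):

```python
# Rough check of the 256GB vs 512GB pricing argument above.
# Prices are the ones quoted in the post (pcpartpicker at the time), purely illustrative.
drives = {
    "256GB NVMe": {"capacity_gb": 256, "price_usd": 20},
    "512GB NVMe": {"capacity_gb": 512, "price_usd": 29},
}

for name, d in drives.items():
    per_tb = d["price_usd"] / d["capacity_gb"] * 1000
    print(f"{name}: ~${per_tb:.0f} per TB")

small, large = drives["256GB NVMe"], drives["512GB NVMe"]
extra_capacity = large["capacity_gb"] / small["capacity_gb"] - 1   # +100% capacity
extra_cost = large["price_usd"] / small["price_usd"] - 1           # +45% cost
print(f"Capacity: +{extra_capacity:.0%}, cost: +{extra_cost:.0%}")
```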
 
The only reason it's "Way too much" is because Intel couldn't get enough orders to bring manufacturing to proper scale.
It isn't just because of orders; it is because it is vastly different from NAND and more expensive to make in general. Sure, if you had a bunch of companies making it, that could drive down the price. However, it is also a niche product that was never properly marketed, as Intel wanted to make it into RAM. In reality they should have focused it solely on write cache drives for hyperconverged infrastructure. I know exactly how fast it is as a write cache, as I run an all-flash VMware vSAN with Optane P4800X cache drives.
 

Kamen Rider Blade

It isn't just because of orders; it is because it is vastly different from NAND and more expensive to make in general. Sure, if you had a bunch of companies making it, that could drive down the price. However, it is also a niche product that was never properly marketed, as Intel wanted to make it into RAM. In reality they should have focused it solely on write cache drives for hyperconverged infrastructure. I know exactly how fast it is as a write cache, as I run an all-flash VMware vSAN with Optane P4800X cache drives.
It can be used for many things; "Write Cache" drives are just one of them, along with functioning as Crappy RAM.

Just wondering, have you ever measured Optane against older generations of DDR and compared their performance in Optane DIMM mode?
I wonder how they would compare performance wise.

We could go from DDR3 -> DDR1, even SDR if we needed to compare Optane DIMM's Memory performance =D
 
It can be used for many things; "Write Cache" drives are just one of them, along with functioning as Crappy RAM.

Just wondering, have you ever measured Optane against older generations of DDR and compared their performance in Optane DIMM mode?
I wonder how they would compare performance wise.

We could go from DDR3 -> DDR1, even SDR if we needed to compare Optane DIMM's Memory performance =D
If you were to use Optane only as your RAM, the performance of a computer would be horrible. The read/write speeds of PMem 200 are only between DDR-200 & DDR-400. That is performance from 2003, but with R/W latency measured in microseconds vs nanoseconds. That order-of-magnitude increase in latency severely affects the user, which is why you need RAM to act as a cache when using Optane in memory mode. It is too slow for modern computers on its own. Do you remember back in the early 2000s when you would get a computer with 256MB RAM on Win XP and all of a sudden it would feel like it was stalling out? If you ever looked at your RAM usage you would see it saying you were using 384MB out of 256MB, so you were disk swapping. Even on a computer with NVMe storage disk swapping is painful, and you still have RAM. Now remove all that RAM and think how painful that would be.
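For a rough feel of why that DRAM cache matters so much in memory mode, here is a back-of-the-envelope sketch; the latency figures are illustrative assumptions, not measured numbers:

```python
# Average access latency when DRAM sits in front of Optane as a cache (memory mode).
# Both latency values are ballpark assumptions for illustration only.
DRAM_LATENCY_NS = 80       # rough loaded DRAM access time
OPTANE_LATENCY_NS = 1_000  # PMem access, microsecond territory

def effective_latency_ns(dram_hit_rate: float) -> float:
    """Blend of DRAM hits and Optane misses."""
    return dram_hit_rate * DRAM_LATENCY_NS + (1 - dram_hit_rate) * OPTANE_LATENCY_NS

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    print(f"DRAM hit rate {hit_rate:>4.0%}: ~{effective_latency_ns(hit_rate):.0f} ns per access")
```

Without the DRAM in front (the 0% line), every access pays the full Optane latency.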
 
If you were to use Optane only as your RAM, the performance of a computer would be horrible. The read/write speeds of PMem 200 are only between DDR-200 & DDR-400. That is performance from 2003, but with R/W latency measured in microseconds vs nanoseconds. That order-of-magnitude increase in latency severely affects the user, which is why you need RAM to act as a cache when using Optane in memory mode. It is too slow for modern computers on its own. Do you remember back in the early 2000s when you would get a computer with 256MB RAM on Win XP and all of a sudden it would feel like it was stalling out? If you ever looked at your RAM usage you would see it saying you were using 384MB out of 256MB, so you were disk swapping. Even on a computer with NVMe storage disk swapping is painful, and you still have RAM. Now remove all that RAM and think how painful that would be.
So, since you obviously have experience with servers, is it reasonable to think that Intel dropped Optane because Xeon CPU Max is much easier (cheaper) to make and will give similar benefits, or is that nonsense?!
 

atomicWAR

Are the long boot times for memory training still a thing?

It is for me. I've heard some users/motherboards are better, but my X670E Taichi still takes its sweet time booting due to memory training regardless of the BIOS. I run the 1.4 launch BIOS as it is oddly the most stable ROM and I get the best boost speeds on more cores (5.75GHz on eight cores, 5.7GHz on two cores, and 5.675GHz on the remaining six cores with the 1.4 launch BIOS, vs 5.75GHz on two cores, 5.725GHz on six cores, and 5.675GHz on the remaining eight cores with the latest 1.11 non-beta BIOS), but even the latest non-beta 1.11 takes forever and a day training memory. So I just run 1.4 for now... waiting for something better to come down the pipe. Plus the 1.11 BIOS is oddly very unstable for me. In one setting it gives the IO controller too much voltage for the DDR5-6000 RAM in EXPO mode, going from auto to redlining it at 1.3V; you can tweak that manually back to the old settings, but crashes were still pretty common despite that. Also, just a me thing, I have to loosen my memory timings from 30-40-40-96 to 30-41-41-96 on both BIOSes to get everything to play nice and not crash with 64GB of RAM.
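For what it's worth, that loosening is tiny in absolute time. A quick sketch converting those cycle counts to nanoseconds, assuming the usual CL-tRCD-tRP-tRAS order for 30-40-40-96:

```python
# Convert DDR memory timings (clock cycles) to nanoseconds.
# DDR5-6000 means 6000 MT/s, i.e. a 3000 MHz memory clock (two transfers per clock).
def cycles_to_ns(cycles: int, transfer_rate_mt_s: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2
    return cycles / clock_mhz * 1000

RATE = 6000  # DDR5-6000
for label, before, after in [("tRCD", 40, 41), ("tRP", 40, 41)]:
    print(f"{label}: {cycles_to_ns(before, RATE):.2f} ns -> {cycles_to_ns(after, RATE):.2f} ns "
          f"(+{cycles_to_ns(after - before, RATE):.2f} ns)")
```

So each extra cycle at 6000 MT/s costs about a third of a nanosecond.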
 
So, since you obviously have experience with servers, is it reasonable to think that Intel dropped Optane because Xeon CPU Max is much easier (cheaper) to make and will give similar benefits, or is that nonsense?!
That is a possibility. I also think they know that cache can make a big difference for applications in servers. Microsoft replaced most of their Milan-based HPC servers with Milan-X as soon as Milan-X was released, even though the Milan ones were only a year old. The reason is that HPC applications were showing a huge increase (greater than generational increases) in performance, depending on the application. Now Intel is using HBM, which has higher latency and lower bandwidth than SRAM but is much larger. I am interested in seeing benchmarks of the SPR Max.
 

Kamen Rider Blade

Buffering is already done, as those devices already have RAM used as a buffer. That said, in new computers ODDs are basically obsolete unless you are building an HTPC.
Sadly that is true, but I want to help bring ODDs back into prominence by helping out Sony's Archival Disc technology and bringing truly cheap, high-capacity optical disc storage.
Also bring back Kenwood's TrueX technology, where it split the laser beam into seven beams and read seven tracks simultaneously; that would dramatically cut read times on optical discs.

Not true when looking at a SAN solution. Using Optane in a VMware vSAN as your write cache can increase your DB write performance by 25%. The reason is the high IOPS at low queue depth. Do note that in an all-flash VMware vSAN the cache drive is only a write cache; all reads are done from the NAND storage layer.
Ok, that's a niche Enterprise solution. I'm glad it has uses there too.

But the mainstream consumer side is where I'm focused on finding use cases for Optane.

The OS and applications are growing all the time. My work laptop has a 256GB NVMe SSD and I have 110GB free and that is only with the applications I use all the time for work. As it ages and more service packs are added the storage will continue to increase. Also due to the cost of drives there is little reason to go with a smaller OS drive anymore. For example the cheapest 256GB NVMe drive on pcpartpicker runs $20 whereas the 512GB runs $29. Why would you limit yourself to only 256GB when you can double your storage for less than a 50% increase in price?
Since I'm on the consumer side, I don't find the need for an especially large OS drive to be that big of a deal. I would be glad to have 256 GB of Optane as an OS drive and shove any games & large media onto an SSD, with cold storage on HDDs. Obviously, the larger the better.

The RAW Random Read/Write performance and consistency of Optane for an OS drive use case is why I find Optane appealing.

Imagine how fast your OS would respond if you had your basic applications located in the Optane die, sharing the same physical packaging as DRAM in the same DIMM.
The travel distance for the signals would be measured in single-digit millimeters vs being measured in many centimeters or inches by:
- making a round trip from a dedicated Optane DIMM through the memory controller, and back into DRAM.
That's a LOT of unnecessary travel for data to go from an OS drive's storage to RAM.
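For a rough sense of scale on that trace-length argument, here is a time-of-flight sketch; it assumes signals travel at roughly 2/3 the speed of light in PCB traces, and the distances are purely illustrative:

```python
# Time of flight for a signal over PCB trace distances (illustrative numbers only).
SIGNAL_SPEED_M_PER_S = 2e8  # ~0.67c, a common rule of thumb for copper traces in FR4

def time_of_flight_ps(distance_mm: float) -> float:
    return distance_mm / 1000 / SIGNAL_SPEED_M_PER_S * 1e12

paths = [
    ("Optane die to DRAM on the same DIMM (~5 mm)", 5),
    ("Optane DIMM -> memory controller -> DRAM DIMM (~200 mm round trip)", 200),
]
for label, mm in paths:
    print(f"{label}: ~{time_of_flight_ps(mm):.0f} ps")
```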
 

Kamen Rider Blade

If you were to use Optane only as your RAM, the performance of a computer would be horrible. The read/write speeds of PMem 200 are only between DDR-200 & DDR-400. That is performance from 2003, but with R/W latency measured in microseconds vs nanoseconds. That order-of-magnitude increase in latency severely affects the user, which is why you need RAM to act as a cache when using Optane in memory mode. It is too slow for modern computers on its own. Do you remember back in the early 2000s when you would get a computer with 256MB RAM on Win XP and all of a sudden it would feel like it was stalling out? If you ever looked at your RAM usage you would see it saying you were using 384MB out of 256MB, so you were disk swapping. Even on a computer with NVMe storage disk swapping is painful, and you still have RAM. Now remove all that RAM and think how painful that would be.
So basically, it's best to treat Optane as "The Fastest Storage" Memory possible, not crappy RAM.
 

Kamen Rider Blade

It is for me. I've heard some users/motherboards are better, but my X670E Taichi still takes its sweet time booting due to memory training regardless of the BIOS. I run the 1.4 launch BIOS as it is oddly the most stable ROM and I get the best boost speeds on more cores (5.75GHz on eight cores, 5.7GHz on two cores, and 5.675GHz on the remaining six cores with the 1.4 launch BIOS, vs 5.75GHz on two cores, 5.725GHz on six cores, and 5.675GHz on the remaining eight cores with the latest 1.11 non-beta BIOS), but even the latest non-beta 1.11 takes forever and a day training memory. So I just run 1.4 for now... waiting for something better to come down the pipe. Plus the 1.11 BIOS is oddly very unstable for me. In one setting it gives the IO controller too much voltage for the DDR5-6000 RAM in EXPO mode, going from auto to redlining it at 1.3V; you can tweak that manually back to the old settings, but crashes were still pretty common despite that. Also, just a me thing, I have to loosen my memory timings from 30-40-40-96 to 30-41-41-96 on both BIOSes to get everything to play nice and not crash with 64GB of RAM.
How many DIMMs are you running?
 
The RAW Random Read/Write performance and consistency of Optane for an OS drive use case is why I find Optane appealing.
For sure Optane is great for an OS drive; the problem comes in with capacity. 256GB is the smallest you want to go for an OS drive on the consumer side. Windows 10 or 11 is larger than Server 2019 or Server 2022. When I build a VM based on Windows Server, most of the OS drives are only 50GB in size because the OS is only 10-20GB total. The same cannot be said for consumers.

But the mainstream consumer side is where I'm focused on finding use cases for Optane.
The biggest hurdle for Optane on consumer side is cost. Back when you could get the 32GB Optane memory, it was $77. Now you could configure it to work as a caching device for your HDD and it did that quite well. However, you needed the software in order to use it correctly. Much like using Optane for nonvolatile storage, the software has to be aware of it for it to be useful.
 

Kamen Rider Blade

For sure Optane is great for an OS drive; the problem comes in with capacity. 256GB is the smallest you want to go for an OS drive on the consumer side. Windows 10 or 11 is larger than Server 2019 or Server 2022. When I build a VM based on Windows Server, most of the OS drives are only 50GB in size because the OS is only 10-20GB total. The same cannot be said for consumers.
Mainstream Windows needs to go on a diet; its install footprint is too bloated IMO.
I'm sure we can agree to that.

The biggest hurdle for Optane on consumer side is cost. Back when you could get the 32GB Optane memory, it was $77. Now you could configure it to work as a caching device for your HDD and it did that quite well. However, you needed the software in order to use it correctly. Much like using Optane for nonvolatile storage, the software has to be aware of it for it to be useful.
That's the issue; it was proprietary as heck to Intel.

Optane should be directly attached onto the HDD controller as a large Read/Write Cache buffer & Power Outage buffer.

I'd rather have (Optane/3DXpoint/QuantX <- whichever marketing name you want to use) be used openly by all the memory vendors and fully integrated into their products w/o the need for drivers or software.

It should "Just Work" when you attach your device to the PC.
 

Kamen Rider Blade

Treat Optane as the fastest storage possible at low QDs, but slow when compared to RAM. There are other non-volatile options for RAM; getting them to the cost and density of DRAM is the issue. Two possible replacements for DRAM are MRAM and FeRAM.
Until MRAM & FeRAM pan out, we are stuck with DRAM & Optane as viable technology paths.

But for the "Every Day" PC user, I want Optane as my OS drive.

I agree with Wendell from Level1Techs that Optane was HORRIBLY handled by Intel; they didn't play to its actual strengths and mismanaged the tech.

Time to sell it to all the Memory Vendors and work together to find good use cases for it.
 
Mainstream Windows needs to go on a diet; its install footprint is too bloated IMO.
I'm sure we can agree to that.
I agree with that. However, one thing I have learned is that no matter how powerful the hardware gets or how much storage you have, programmers will find a way to piss it away. Basically, they write very bloated code.

Optane should be directly attached onto the HDD controller as a large Read/Write Cache buffer & Power Outage buffer.
If this were 2010 and SSDs themselves were still very expensive, then I would agree. However, with the price of SSDs it doesn't make sense to do that to HDDs. HDDs are now used for bulk storage and backups. Back in 2015 HGST (now part of WD) made a media cache that used spare area on the disk platter to increase write performance. Basically it worked in such a way that they were able to wait until they had a QD of 256 and then write all the data at once. This ended up giving them excellent performance, at least for HDDs, in actual use. They were able to get 250MB/s read/write speeds most of the time and increased IOPS. It ended up that their 7200RPM drive had the performance of a 10K drive and their 10K was that of a 15K. In the end they were limited in performance by the actuator, and still today that is the biggest limiting factor for HDDs. This is why you are seeing drives coming out with dual actuators, which doubles their performance. Adding Optane to this will perhaps make your burst speed better, but you will hit diminishing returns.
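A very simplified sketch of that media-cache batching idea, i.e. buffer writes and flush them as one big LBA-sorted batch instead of seeking per request (a hypothetical structure, not actual HGST/WD firmware):

```python
# Toy model of write batching: collect requests until a target queue depth,
# then flush them in LBA order so the actuator sweeps the platter once.
from dataclasses import dataclass, field

def commit_to_platter(lba: int, data: bytes) -> None:
    pass  # stand-in for the real media write to the reserved cache area

@dataclass
class BatchedWriteCache:
    flush_queue_depth: int = 256
    pending: list = field(default_factory=list)  # (lba, data) tuples

    def write(self, lba: int, data: bytes) -> None:
        self.pending.append((lba, data))
        if len(self.pending) >= self.flush_queue_depth:
            self.flush()

    def flush(self) -> None:
        for lba, data in sorted(self.pending):  # one ordered sweep, fewer seeks
            commit_to_platter(lba, data)
        self.pending.clear()

cache = BatchedWriteCache()
for i in range(1000):
    cache.write(lba=(i * 37) % 5000, data=b"x" * 4096)  # scattered 4K writes
cache.flush()  # drain whatever is left
```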
 

Kamen Rider Blade

I agree with that. However, one thing I have learned is that no matter how powerful the hardware gets or how much storage you have, programmers will find a way to piss it away. Basically, they write very bloated code.
Blame MS & CompSci Universities who aren't pushing C/C++ as the basic Programming language everybody should be using to learn on.

They're pushing more abstracted/slower programming languages like Java & Python that abstract memory management away from the programmer.

On top of that, you have many layers of API that are stacked on top of each other along with programming for the Web as the default way to learn programming these days.

Go look at MS Teams, it's BLOATED as hell & Super Slow for a basic IM client.
Back in the 90's, we had super fast IM clients written in C/C++ that took little to no memory & storage.

Now everybody is peddling easy/cheap coders who use very abstracted languages and programming API's that are stacked on many layers.

It's getting too slow because they're trying to peddle out Mc-Degrees at cheapo Uni's & Coding Training camps.

If this were 2010 and SSDs themselves were still very expensive, then I would agree. However, with the price of SSDs it doesn't make sense to do that to HDDs. HDDs are now used for bulk storage and backups. Back in 2015 HGST (now part of WD) made a media cache that used spare area on the disk platter to increase write performance. Basically it worked in such a way that they were able to wait until they had a QD of 256 and then write all the data at once. This ended up giving them excellent performance, at least for HDDs, in actual use. They were able to get 250MB/s read/write speeds most of the time and increased IOPS. It ended up that their 7200RPM drive had the performance of a 10K drive and their 10K was that of a 15K. In the end they were limited in performance by the actuator, and still today that is the biggest limiting factor for HDDs. This is why you are seeing drives coming out with dual actuators, which doubles their performance. Adding Optane to this will perhaps make your burst speed better, but you will hit diminishing returns.
But Burst Speeds & Reads/Writes are what everybody worries about in the SSD world.
Once you get past that large SLC cache, performance for SSDs tanks dramatically.
Having Optane attached to the HDD controller board would act like a large SLC cache does for NAND-flash-based SSDs.

Realistically, given most consumers these days, how often are you sending in Multi-GigaByte files?

Are you sending in more than 32 GiB in one large burst?

Realistically, having 256 GiB of Optane on a 20 TB HDD is more than enough for most workloads where the end user could flood an HDD controller by accident because they weren't paying attention.

That large Optane buffer would buy a lot of time for the actuators & servos to do their job.
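A quick back-of-the-envelope for how much time such a buffer actually buys; the drain and burst rates below are illustrative assumptions, not drive specs:

```python
# How long a 256 GiB front-end buffer can absorb a burst the platters can't keep up with.
BUFFER_GIB = 256
HDD_DRAIN_MB_S = 250    # sustained sequential write, roughly as cited above
BURST_IN_MB_S = 2_000   # e.g. an accidental multi-GB dump over a fast link

buffer_mb = BUFFER_GIB * 1024
net_fill_rate = BURST_IN_MB_S - HDD_DRAIN_MB_S  # rate at which the buffer actually fills
print(f"Absorbs a {BURST_IN_MB_S} MB/s burst for ~{buffer_mb / net_fill_rate / 60:.1f} minutes")
print(f"Draining a full buffer afterwards takes ~{buffer_mb / HDD_DRAIN_MB_S / 60:.1f} minutes")
```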
 

Kamen Rider Blade

Just two. 32GB x2 DDR5-6000. My understanding is the IMC for Zen 4 does not like running four DIMMs, and when it does it defaults to the base spec and even then it can be unstable... but yeah, that's mine below...

https://www.gskill.com/product/165/390/1665020865/F5-6000J3040G32GX2-TZ5NR
It doesn't like 4x DIMMs, but as you get larger in RAM capacity, you might have to loosen the timings to maintain stability.

From my understanding, DDR5 Memory Controllers weren't designed for 2x DIMMs attached to one Memory Channel.

That practice during previous DDR eras of having 2x DIMMs per Memory Channel was supposed to be "Obsoleted & Deleted".

But the market powers wanted that feature, so they had to put it in.
 
Back in the 90's, we had super fast IM clients written in C/C++ that took little to no memory & storage.
Also, back in the 90s you didn't have the resources available, so code had very little documentation inside the program. Remember that making comments in programs takes up RAM. You are now taught to make a lot of comments so that anyone can pick up the code and understand what you are trying to do.

At one school I went to, the first language we learned was Java, but at another one I went to we learned C# first.

Realistically, given most consumers these days, how often are you sending in Multi-GigaByte files?
Not often. That said, look at cost again. 32GB Optane was $77, plus a decent 1TB HDD is another $50, so an HDD + Optane runs $127; we can say $100 if we want. Now a decent PCIe 4.0 1TB NVMe drive runs $70. That NVMe SSD will run circles around any HDD + Optane solution, all while being cheaper. Sure, once you run out of pSLC the writes are slower, but we are still looking at SATA3-level writes. No matter how you look at it, there is no good case for HDD + Optane unless it was SIGNIFICANTLY cheaper than just an SSD.
 

Kamen Rider Blade

Also, back in the 90s you didn't have the resources available, so code had very little documentation inside the program. Remember that making comments in programs takes up RAM. You are now taught to make a lot of comments so that anyone can pick up the code and understand what you are trying to do.
Once you compile your code, all the comments are automatically stripped out.
As far as source code commenting styles, I was always taught to document everything, so my comments & naming convention read like a story.
My teammates were always angry that my coding style was "Too Verbose" and that they had to type too many characters, whereas they went to the extreme opposite end: they didn't comment well and their naming conventions were obtuse for the sake of it.
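On the point about compilation stripping comments: Python's bytecode compiler does the same thing, which is easy to demonstrate (a small illustration, not specific to C/C++ toolchains):

```python
# Two functions that differ only in comments compile to identical bytecode.
def with_comments(x):
    # add one to the input -- this comment exists only in the source file
    return x + 1

def without_comments(x):
    return x + 1

print(with_comments.__code__.co_code == without_comments.__code__.co_code)  # True
```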

At one school I went to, the first language we learned was Java, but at another one I went to we learned C# first.
My condolences that you learned Java as your first language.
C# is the superior language IMO.
But C/C++ is my favorite & amongst the fastest languages on the planet.

Programming Languages Benchmarks Game


Not often. That said, look at cost again. 32GB Optane was $77, plus a decent 1TB HDD is another $50, so an HDD + Optane runs $127; we can say $100 if we want. Now a decent PCIe 4.0 1TB NVMe drive runs $70. That NVMe SSD will run circles around any HDD + Optane solution, all while being cheaper. Sure, once you run out of pSLC the writes are slower, but we are still looking at SATA3-level writes. No matter how you look at it, there is no good case for HDD + Optane unless it was SIGNIFICANTLY cheaper than just an SSD.
If you have a smaller HDD, you use 16 GB of Optane; you only use 32 GB of Optane for larger HDD capacities.
And in this day and age, no HDD should be manufactured below 2 TB.
The profit margins on 1 TB HDDs are slim, so 2 TB should be the new minimum.
But you should scale the included Optane with the capacity of the HDD as it goes up.

HDDs' major selling point is price per GiB or per TiB; maintaining that while closing the gap on sustained read/write speeds and offering value with Optane is what matters.
 

atomicWAR

It doesn't like 4x DIMMs, but as you get larger in RAM capacity, you might have to loosen the timings to maintain stability.

From my understanding, DDR5 Memory Controllers weren't designed for 2x DIMMs attached to one Memory Channel.

That practice during previous DDR eras of having 2x DIMMs per Memory Channel was supposed to be "Obsoleted & Deleted".

But the market powers wanted that feature, so they had to put it in.

So yeah, we're saying the same thing... though I was unaware of the design issue with dual DIMMs per channel on DDR5, so thanks for that. But yeah, loosening the timings is exactly what I did. It's one of the first things I typically tinker with if I find any instability at stock or with a RAM OC. It just seems more stable on the launch BIOS, so I've stuck with it. I imagine if I loosened the timings further the 1.11 might be more stable, but I don't see a need to downgrade my timings, core boosts, number of cores that boost higher, etc. at this point until a BIOS offers something I need or fixes an issue I have/can't compensate for. Not the first time a launch BIOS suited me best for a while, though it is usually not the case after the third or fourth release like now...
 

KyaraM

I very much agree on the bashing bit, and that was my whole point in my first post. Intel fanboys are out there, yes, I didn't deny it, but they are a minority, if not a vocal one, as are AMD fanboys. Most on Tom's being hardware-neutral WAS my point in my first post. I just didn't deny the fanboys have been up in arms as of late was all; they have on both sides. Fanboys suck. I don't get being a slave to a corporation that cares only for your money. IDC which company it is. They are all bottom line and have zero urge to be best friends forever. I agree more folks need to understand that. Sorry if you took me as attacking Intel directly... anything but. Intel and AMD are both duplicitous and I make no mistake otherwise.

Edit: Also, much of what you said has nothing to do with what I've personally said in the past. So I don't get your 'whole' argument, only some of it. I stand by it that fanboys do no one any good. I've run Intel for 18 straight years until very recently, on my latest build, when AMD finally earned the nod. Both CPU archs from AMD/Intel have great value; it was a tough choice. Idk what else to say.
I apologize for coming off so strong. At first reading in the morning, your post seemed to align a lot more with the previous, very blind one I quoted. I later reread it and realized I was wrong, and changed a few bits, but was quite busy today and didn't have the time to go through it all. I tried to change it into an addition to your post but admittedly did a very sloppy job. I completely agree with what you said. Have said the same for ages, in fact. I also have both AMD and Intel systems and usually buy what I need with little regard to the company, with one exception due to experiences. Both chip designs are marvels of engineering, which is part of why the badmouthing angers me so much, as well as the attitude I described. But I really should stop now.

Again, I am very sorry for the above post and sloppy editing. It was a misunderstanding on my part.
 

abufrejoval

STOP!

Please stop putting V-cache and 5.7GHz in a single sentence, because that's quite clearly misleading!

There won't be any CCDs that combine both; it will need to be an XOR choice between those two: it's either V-cache or record clocks and the clearest proof of that lies in the fact that AMD provides those features in two very distinct CCD dies on a single carrier with the two top chips.

One CCD will be the ordinary non-3D die offering the high clocks of the 7950X, the other will be a CCD with the extra 64MB of 3rd level cache added via vias, but it won't clock higher than 5 GHz nor listen to any attempts to fiddle with those overclock settings: if PBO is supported, it will be lop-sided, only.

It's both a very wise choice and a bit frustrating, because there are just very few miracles to be had in CMOS chips, except the very miracle of them existing and continuing to move to finer process sizes and ever more advanced packaging.

Quite literally the only technically viable path for a 7950X3D is to take a 7950X binned CCD and stick it on a carrier together with a 7800X3D CCD and a neutral IOD. The 3D CCD might actually have a slightly better bin with lower voltages and wattage than an ordinary 7800X3D, but there is no thermal headroom nor does the thermal stress of the extra 64MB with the via bonding between those dies disappear.

So please use your head and stop messing with user expectations.

One could almost think you're trying to lead up to a "Bulldozer"-like story of "failed promises", when the combination you pushed can't actually be delivered by physics.

Yes, you can have V-cache and you can have cores clock near 6 GHz, but never both on the same core and most certainly not on every core at once. Because that would probably require around 1000 Watts and that just cannot be cooled with any material known to man from such a small surface area, even in outer space at the dark side of the moon.
 
STOP!

Please stop putting V-cache and 5.7GHz in a single sentence, because that's quite clearly misleading!

There won't be any CCDs that combine both; it will need to be an XOR choice between those two: it's either V-cache or record clocks and the clearest proof of that lies in the fact that AMD provides those features in two very distinct CCD dies on a single carrier with the two top chips.

One CCD will be the ordinary non-3D die offering the high clocks of the 7950X, the other will be a CCD with the extra 64MB of 3rd level cache added via vias, but it won't clock higher than 5 GHz nor listen to any attempts to fiddle with those overclock settings: if PBO is supported, it will be lop-sided, only.

It's both a very wise choice and a bit frustrating, because there are just very few miracles to be had in CMOS chips, except the very miracle of them existing and continuing to move to finer process sizes and ever more advanced packaging.

Quite literally the only technically viable path for a 7950X3D is to take a 7950X binned CCD and stick it on a carrier together with a 7800X3D CCD and a neutral IOD. The 3D CCD might actually have a slightly better bin with lower voltages and wattage than an ordinary 7800X3D, but there is no thermal headroom nor does the thermal stress of the extra 64MB with the via bonding between those dies disappear.

So please use your head and stop messing with user expectations.

One could almost think you're trying to lead up to a "Bulldozer"-like story of "failed promises", when the combination you pushed can't actually be delivered by physics.

Yes, you can have V-cache and you can have cores clock near 6 GHz, but never both on the same core and most certainly not on every core at once. Because that would probably require around 1000 Watts and that just cannot be cooled with any material known to man from such a small surface area, even in outer space at the dark side of the moon.
I looked at the articles and can't find where they said you get V-cache and 5.7GHz on the same core?
 

atomicWAR

I apologize for coming off so strong. At first reading in the morning, your post seemed to align a lot more with the previous, very blind one I quoted. I later reread it and realized I was wrong, and changed a few bits, but was quite busy today and didn't have the time to go through it all. I tried to change it into an addition to your post but admittedly did a very sloppy job. I completely agree with what you said. Have said the same for ages, in fact. I also have both AMD and Intel systems and usually buy what I need with little regard to the company, with one exception due to experiences. Both chip designs are marvels of engineering, which is part of why the badmouthing angers me so much, as well as the attitude I described. But I really should stop now.

Again, I am very sorry for the above post and sloppy editing. It was a misunderstanding on my part.

Absolutely no worries. We all have posts like that. I did one tired the other night that was off base, and the other month, during the AMD announcement of holiday pricing, I incorrectly hit Jarred for not quoting a price right in his article, lol. So trust me when I say I understand and it happens. Your note means a lot though; it seemed truly heartfelt, thank you.

Yeah, fanboyism is the bane of those trying to give good advice online. I've had a bit of everything on the CPU side over the years. Folks should buy what they need, not a brand, "the best", or any other simpleton way of viewing tech. It's never that easy, but you need not make it that hard either! Anyways, have a good one and take care...
 