News AMD Announces Ryzen 7000X3D Pricing: $449 to $699 Starting Feb 28th

Page 2 - Tom's Hardware community discussion.

atomicWAR

Glorious
Ambassador
I am still VERY amused at the blind bias toward Intel all over this forum.

I don't know that it's the whole forum... BUT Intel fanboys have been up in arms and extremely vocal as of late. It's not hard to see, that is for sure. I don't get fanboying for a corporation; they certainly care little for you beyond your dollars. We should all want to encourage stronger competition, which fosters innovation; otherwise we get a decade of the same old same old, like quad cores... and we all lose. Right now it's a great time for CPUs. Competition and innovation have never been stronger in the space, IMHO. We should all hope things stay this competitive for the foreseeable future. Imagine where we'd be if Faildozer never happened or ARM/Apple silicon had become more competitive a decade or more sooner.
 

atomicWAR

Glorious
Ambassador
I think that the main pitfall with the 5800X3D was that for games that took advantage of the cache, it was the fastest CPU, but because its base and boost clocks were slower than a regular 5950X or 5900X, it would suffer for anything that didn't.

The base and boost clocks are almost identical this time out, so there shouldn't be any downside to this versus a normal equivalent 7xxx series.

If AMD can nail scheduling, which I find semi-unlikely but AM hopeful for... X3D may finally make a compelling argument for gamers to go with a dual-CCD setup. Right now only those going multi-use, creator/gamer for example, are lining up for R9 chips... or at least anyone with common sense, as eight cores is enough for a strictly gaming build. Point being, if you can have one CCD perfectly geared to games that desire the fastest clocks, while the cache-intensive games use the slower, cache-heavy cores... it could be the first solid argument AMD has made that the extra cores in R9 chips are worth the extra cash in gaming! (hehehe)
 
Last edited:
  • Like
Reactions: blppt
My problem with these cache-heavy X3D CPUs is that the benefits are extremely game specific.

There are games that are helped tremendously by more cache because they have data intensive algorithms that are CPU bound.

But there are also many games that don't benefit from this at all.

And since I have no idea if the games I currently play, or will even play in the near future, will benefit from this, it's just a gamble.
With over a year of benchmarks for the 3D cache, I think we can safely assume that any game which is CPU bound sees a significant bounce in performance.
 

lmcnabney

Prominent
Aug 5, 2022
The base and boost clocks are almost identical this time out, so there shouldn't be any downside to this versus a normal equivalent 7xxx series.
Really?

Ryzen 7 7800X3D: $449, 8C/16T, 4.2 / 5.0 GHz
Ryzen 7 7700X: $349 ($399), 8C/16T, 4.5 / 5.4 GHz
The 7700X is clocked quite a bit higher than the 7800X3D. Heck, even the 7600X has higher clocks.
 
$699? $599???

For the sake of Pete, can't those bean counters at AMD learn one stupid lesson and do something smart for once? Especially after the DDR5 debacle.

How about reducing the price of those new CPUs by 20%, which would most likely result in a higher sales volume and more than make up for the slightly lower price?

They seem to have graduated from the "Let me shoot myself in the foot - REPEATEDLY" college!!
Realistically, no one is likely to have an actual "need" for these highest-end models, so they are not exactly mainstream parts. They are targeting the relatively small market of those who are willing to pay a big premium for the best gaming performance, even if the real-world differences are typically going to be negligible over products costing hundreds of dollars less. Assuming they manage to outperform Intel's comparably-priced models for gaming more often than not, then there will be a market for them.

The 7900X is already just barely faster than the 13700K and you wouldn't think they would allow the 7800X3D to outperform the 7900X for $100 less.
These X3D chips are targeted at cache-heavy workloads like games, where there's currently little benefit from going over 8 cores/16 threads. So for the kinds of heavily-multithreaded workloads you would get a 7900X for, it's still going to beat the 7800X3D by a decent margin, since it offers 50% more cores and threads. In many CPU-limited games though, the 7800X3D should pull ahead of the 7900X, as games will tend to see more benefit from the cache than from additional cores that would be sitting mostly unused. So each processor fulfills different needs. And while the 7900X might have launched with a $100 higher MSRP, it can already be found for about the same price as the 7800X3D, or a little less, at the biggest US online retailers.

So $150 for 4 more cores going from the 7800 to the 7900? Great math there, AMD.
Getting 50% more cores and threads (but with the same amount of V-cache) for a 33% higher price doesn't exactly seem out of line with expectations.
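For what it's worth, that scaling claim checks out arithmetically. A quick sketch, using the MSRPs and core counts discussed above ($449 / 8 cores for the 7800X3D, $599 / 12 cores for the 7900X3D):

```python
# Quick check of "50% more cores for a 33% higher price"
# (figures from the thread: 7800X3D $449 / 8 cores, 7900X3D $599 / 12 cores).
cores_7800, price_7800 = 8, 449
cores_7900, price_7900 = 12, 599

core_gain = cores_7900 / cores_7800 - 1    # 0.5   -> 50% more cores
price_gain = price_7900 / price_7800 - 1   # ~0.33 -> ~33% more money

print(f"{core_gain:.0%} more cores for {price_gain:.0%} more money")
# -> 50% more cores for 33% more money
```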
 
  • Like
Reactions: King_V

Kamen Rider Blade

Distinguished
Dec 2, 2013
My problem with these cache-heavy X3D CPUs is that the benefits are extremely game specific.

There are games that are helped tremendously by more cache because they have data intensive algorithms that are CPU bound.

But there are also many games that don't benefit from this at all.

And since I have no idea if the games I currently play, or will even play in the near future, will benefit from this, it's just a gamble.
I think that's why they went with the "Half & Half" strategy.

They don't know which type of game you are going to run, and whether it benefits more from frequency or from massive L3 cache.

So they gave you one CCD of each.

Problem solved.

If you have a frequency-dependent game, you Process Lasso it onto the frequency CCD.
If you have a cache-dependent game, you Process Lasso it onto the V-Cache CCD.

Problem solved.
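What Process Lasso does under the hood is set a CPU affinity mask. A minimal sketch of building such a mask by hand; the CCD-to-CPU numbering is an assumption (a 7950X3D-style layout with 16 logical CPUs per CCD), not something the thread confirms:

```python
# Sketch: build a CPU-affinity bitmask for one CCD, the same thing
# Process Lasso automates. Assumed layout (not verified): logical
# CPUs 0-15 are the V-Cache CCD, 16-31 the frequency CCD.

def ccd_affinity_mask(ccd: int, threads_per_ccd: int = 16) -> int:
    """Bitmask selecting every logical CPU on the given CCD."""
    mask = 0
    for cpu in range(ccd * threads_per_ccd, (ccd + 1) * threads_per_ccd):
        mask |= 1 << cpu
    return mask

cache_mask = ccd_affinity_mask(0)  # V-Cache CCD   -> 0xFFFF
freq_mask = ccd_affinity_mask(1)   # frequency CCD -> 0xFFFF0000
print(hex(cache_mask), hex(freq_mask))

# On Windows, cmd's built-in `start` accepts such a mask in hex, e.g.:
#   start /affinity FFFF0000 game.exe   (pin to the assumed frequency CCD)
```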
 
  • Like
Reactions: -Fran-

Kamen Rider Blade

Distinguished
Dec 2, 2013
It won't show up on desktop for a while is my guess, but sooner or later it will trickle down.
I'll believe it when I see it.

HBM is very expensive, so unless Intel is willing to risk making a CPU with HBM as L4$ and pricing it accordingly, I doubt they will sell it to consumers.

We saw how AMD abandoned HBM from the consumer market and left it as a tool for the Enterprise market.

So until that happens, I HIGHLY doubt Intel will be willing to create a consumer product with HBM.

Intel won't want to charge the high markup for it.

“The adder for GDDR5 versus HBM2 was about 4X,” Ferro tells The Next Platform. “And the reason is not just the DRAM chips but the cost of the interposer and the 2.5D manufacturing. But the good news with HBM is that you get the highest bandwidth, you get very good power and performance, and you get very small footprint area. You have to pay for all of that. But the HPC and hyperscale communities are not particularly cost constrained. They want lower power, of course, but for them, it’s all about the bandwidth.”

Nobody is questioning HBM3's performance or capabilities or lower power / latency relative to regular RAM.

The bigger question is whether Intel will be willing to sell it to average mainstream consumers and make a product with it included.

The average markup over regular RAM should be ~4x per GiB.

So how many GiB do you think Intel will include just to get a performance advantage?

Many Low End DDR5 kits are around $6-7 per GiB.
So multiply that by 4x = $24-$28 per GiB.

That would add quite a bit to the BoM cost, just to fill in L4$.

An HBM3 stack has between 1-16 layers, each layer being an 8 Gb (1 GiB) to 32 Gb (4 GiB) DRAM chip.
Even if you start at 1 GiB of HBM3 L4$, that's $24-28 added to the BoM cost of your CPU.

If you want more, it gets MUCH more expensive: 64 GiB = $1,536-$1,792 for the HBM3 stack alone.

If you let customers "Custom Order", you'd basically be buying L4$ (HBM3) with a CPU attached to it.
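The arithmetic above is easy to sanity-check. A back-of-envelope sketch using the thread's own assumptions ($6-7 per GiB for low-end DDR5, the ~4x HBM markup from the quote):

```python
# Back-of-envelope HBM BoM cost, using the figures assumed above:
# low-end DDR5 at $6-7 per GiB and a ~4x markup for HBM.
DDR5_PER_GIB = (6, 7)  # USD, low/high estimate
HBM_MARKUP = 4

def hbm_cost(gib: int) -> tuple[int, int]:
    """Estimated (low, high) HBM cost in USD for a given capacity."""
    lo, hi = DDR5_PER_GIB
    return (gib * lo * HBM_MARKUP, gib * hi * HBM_MARKUP)

print(hbm_cost(1))   # 1 GiB of L4$:  (24, 28)
print(hbm_cost(64))  # 64 GiB stack:  (1536, 1792)
```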
 
Last edited:

Nikolay Mihaylov

Prominent
Jun 30, 2022
I think that's why they went with the "Half & Half" strategy.

They don't know which type of game you are going to run, and whether it benefits more from frequency or from massive L3 cache.

So they gave you one CCD of each.

Problem solved.

If you have a frequency-dependent game, you Process Lasso it onto the frequency CCD.
If you have a cache-dependent game, you Process Lasso it onto the V-Cache CCD.

Problem solved.

Please also note that the cores on the regular die do have access to all the L3 cache on the 3D cache-enabled die (with Zen 2, even the communication between the two quad-core complexes on each die had to go via the IO die). I don't know if those cores can evict data into the other die's L3 cache, but they should certainly be able to pull data from there. So it may come down to proper CPU scheduling, which as someone noted above is not a given, but all is not lost either.

I would have preferred for both dies to have 3D cache, but AMD are not stupid. I assume they ran the tests and the improvement was marginal, if any.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
Please also note that the cores on the regular die do have access to all the L3 cache on the 3D cache-enabled die (with Zen 2, even the communication between the two quad-core complexes on each die had to go via the IO die). I don't know if those cores can evict data into the other die's L3 cache, but they should certainly be able to pull data from there. So it may come down to proper CPU scheduling, which as someone noted above is not a given, but all is not lost either.

I would have preferred for both dies to have 3D cache, but AMD are not stupid. I assume they ran the tests and the improvement was marginal, if any.
I think the main issue with having 3D V-Cache on both dies is that you lose out on the frequency boost.

It's actually quite massive.

But given that you have one CCD with 3D V-Cache, and the other without, but with MUCH HIGHER frequency, things balance out.

Also, grabbing data across the CCD boundary has high latency penalties. It's fine when you're grabbing it from within the CCD, but it sucks badly if you have to grab it across CCDs.
It's bad enough that you might as well grab it straight from RAM.
[attached: inter-core latency charts]
 
I don't know that it's the whole forum... BUT Intel fanboys have been up in arms and extremely vocal as of late. It's not hard to see, that is for sure. I don't get fanboying for a corporation; they certainly care little for you beyond your dollars. We should all want to encourage stronger competition, which fosters innovation; otherwise we get a decade of the same old same old, like quad cores... and we all lose. Right now it's a great time for CPUs. Competition and innovation have never been stronger in the space, IMHO. We should all hope things stay this competitive for the foreseeable future. Imagine where we'd be if Faildozer never happened or ARM/Apple silicon had become more competitive a decade or more sooner.
You (plural) are confusing fanboyism with arguing about products. If somebody says that you can put DDR4 on a platform and make it cheaper (or DDR5 and make it faster), and this is actually possible, then that is an argument and not fanboyism.
If somebody says that Intel uses 125W and that's the only level of performance that they guarantee, that is an argument.
If they say that 253W is the maximum above that which Intel allows as headroom for overclocking, be it automatic or manual, that is an argument.
If they say that Intel NEEDS 300W to run, and YOU NEED to provide that level of cooling, that is fanboyism.
Yes, you need to do a full overclock to reach the results of the reviews that show that, but previous Intel generations didn't beat Zen in everything and still made tons of money because people bought them anyway.

I'll believe it when I see it.

HBM is very expensive, so unless Intel is willing to risk making a CPU with HBM as L4$ and pricing it accordingly, I doubt they will sell it to consumers.
It's very expensive now because almost nobody uses it, so almost nobody produces it.
Intel is building up many fabs; they are going to be making this HBM themselves. What do you think they will be doing with all the modules that fail the speed, size, or any other test?
I expect it to take quite a few years, but it will happen.
Everything is expensive only until it isn't anymore.
 
  • Like
Reactions: KyaraM

Kamen Rider Blade

Distinguished
Dec 2, 2013
It's very expensive now because almost nobody uses it, so almost nobody produces it.
Intel is building up many fabs; they are going to be making this HBM themselves. What do you think they will be doing with all the modules that fail the speed, size, or any other test?
I expect it to take quite a few years, but it will happen.
Everything is expensive only until it isn't anymore.
Intel is building up many fabs, yes, that much is true.

But they aren't getting into the DRAM / memory business.

There's a damn good reason why Intel stopped making memory: the profit margins on it are low.

https://www.digitimes.com/news/a20220429VL209/logic-ic-memory-chips.html
In 1984, Intel decided to exit the DRAM business, which marked the start of technology diversion between memories and logic IC. In hindsight, Intel's decision made sense. Memories and logic IC are very different in terms of design, manufacturing processes, equipment deployment, and market characteristics. It is common sense that a company should focus on pursuing excellence in one specific direction.

One of the reasons that Intel restarted the R&D and production of NAND Flash and 3D X-point memory businesses after 2000 was because of the inseparable connection between memories and processors under the von Neumann architecture. It is difficult to categorize the Loihi neuromorphic chip that Intel is now working on because it serves as both a memory and a processor. Another reason is that Intel wanted to leverage storage class memory, at least for a short period, to build core computing architecture and specifications to control more key values in the computing value chain.

Intel has now sold off its NAND & Optane divisions as well.

Intel is shutting down Optane (a huge mistake IMO; it should have been sold off to the big memory companies).
https://www.techradar.com/news/intel-is-shutting-down-its-optane-memory-business

That much is also proven by history. So until you can show me that they're willing to enter the DRAM market, they'll be sourcing it from one of the major suppliers.

Be it SK Hynix, Samsung, Micron, etc.

There are far better (higher-profitability) things to make in those fabs; DRAM isn't one of them.

At the end of the day, it's up to Intel to put the $$$ into making it standard and buying in bulk.

Will Intel risk doing that? Only Pat Gelsinger and the teams inside will know.

But given the BoM cost for HBM, I think it's going to remain an Enterprise & HPC-only solution, where the costs don't really matter that much and wallets are deeper.

Average mainstream consumers are VERY price sensitive.
 
Last edited:

Nikolay Mihaylov

Prominent
Jun 30, 2022
I think the main issue with having 3D V-Cache on both dies is that you lose out on the frequency boost.

It's actually quite massive.

But given that you have one CCD with 3D V-Cache, and the other without, but with MUCH HIGHER frequency, things balance out.

Also, grabbing data across the CCD boundary has high latency penalties. It's fine when you're grabbing it from within the CCD, but it sucks badly if you have to grab it across CCDs.
It's bad enough that you might as well grab it straight from RAM.
[attached: inter-core latency charts]

I'm not going to argue with you. We'll have to wait for the benchmarks either way. But consider the fact that you are comparing core-to-core latency with core-to-memory latency. I am talking about core-to-cache latency.

Also, the other die's cache provides additional bandwidth. Current CPUs can maintain several requests simultaneously, so extra bandwidth helps. Especially considering the fact that DRAM is half-duplex, whereas caches are full duplex, although usually lopsided in favour of reads.

Cheers,
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
I'm not going to argue with you. We'll have to wait for the benchmarks either way. But consider the fact that you are comparing core-to-core latency with core-to-memory latency. I am talking about core-to-cache latency.

Also, the other die's cache provides additional bandwidth. Current CPUs can maintain several requests simultaneously, so extra bandwidth helps. Especially considering the fact that DRAM is half-duplex, whereas caches are full duplex, although usually lopsided in favour of reads.

Cheers,
[attached: L3 cache bandwidth and latency charts]
Here you go, the charts you asked for.

But my main issue isn't the bandwidth; I know that L3$ has massive bandwidth. It's the latency of cross-CCD communication that is the issue.
There's a huge latency penalty for crossing that CCD boundary, even if you're accessing L3$.
So it depends on what data you need to grab and how much of it.
 
Last edited:
  • Like
Reactions: KyaraM

Deleted member 14196

Guest
I am still VERY amused at the blind bias toward Intel all over this forum.

When the 5000 series was hitting, all we saw was "yah but you can get DDR5 with Intel". Then the 7000 series came out and it's "yah but you can get DDR4 with Intel".

When Intel CPUs hit stupidly high power draw and kept going higher, creating the need for expensive cooling and mobos, it was "yah but I only care about game performance and price does not matter".

The 7000 series got much closer to 200W+ than ever before, despite Intel having been well above that for years, and it was "that's crazy, who wants a space heater...."

Now we're getting new X3D chips which have shown impressive gaming gains, and of course we're back to "yah but not everyone only cares about games". Well then, how about you don't buy the gaming-centered upgrade and move on.....

The 7000 series prices came down much faster than I expected or have seen before for a new-gen chip. I'd expect a couple of months to allow those quick to pull the trigger to pay the early adopter tax, and then we'll see some nice discounts. So grab it day 1 and pay the tax, or wait a couple of months and enjoy the discount :)
Exactly. I couldn't agree with you more about this.
 

jk47_99

Distinguished
Jul 24, 2012
The higher frequency on the 7900X3D and 7950X3D is because they have 2 chiplets, and only 1 chiplet will have access to the 3D V-Cache. So the higher frequency is only on the chiplet that is "normal", meaning that, much like the P and E cores on Intel, we will rely on Win 11 and the software to choose the right chiplet for the job. I would expect the chiplet with the V-Cache to clock no higher than the 7800X3D.
 

atomicWAR

Glorious
Ambassador
You (plural) are confusing fanboyism with arguing about products.

No, I very much am not, and in fact I was calling out the previous post claiming the forums were full of Intel fanboys. You are also putting A LOT of words in my mouth. I am all for comparing and contrasting products; it's one of the reasons I love Tom's. I want everyone in the CPU space to be competitive. BUT the number of blind "Intel is better" attitudes has increased exponentially since Zen 3, and it's hard to miss in any tech forum or YouTuber's comments. I have no love for AMD fanboys either; they can be just as bad, if not worse at times, riding the hype train. And neither fanboy camp is productive in the end. As stated previously, I called out the post before mine for saying it's "everyone" that's an Intel fanboy... because you are right that many on Tom's debate without fanboying, but enough don't to justify the previous poster's perspective to a point; though still wrong, it cannot be ignored altogether IMHO.
 
Last edited:

Kamen Rider Blade

Distinguished
Dec 2, 2013
The higher frequency on the 7900X3D and 7950X3D is because they have 2 chiplets, and only 1 chiplet will have access to the 3D V-Cache. So the higher frequency is only on the chiplet that is "normal", meaning that, much like the P and E cores on Intel, we will rely on Win 11 and the software to choose the right chiplet for the job. I would expect the chiplet with the V-Cache to clock no higher than the 7800X3D.
I completely agree with your assessment.

But the more important difference between the 3D V-Cache CCD & the normal CCD is that the performance gap should be MUCH smaller if you mis-allocate your processes/threads to the wrong CCD, compared to P-core vs E-core.

If Windows screws up process/thread assignment, the performance shouldn't "tank" like it would if you shoved it onto an E-core.

It should be slower, but by how much? That remains to be seen, based on the real-world measured performance gap between the 3D V-Cache & normal CCDs.

But I seriously doubt it's going to be nearly as massive as the P-core/E-core disparity.
 
Intel is building up many fabs, yes, that much is true.

But they aren't getting into the DRAM / memory business.

There's a damn good reason why Intel stopped making memory: the profit margins on it are low.

https://www.digitimes.com/news/a20220429VL209/logic-ic-memory-chips.html


Intel has now sold off its NAND & Optane divisions as well.

Intel is shutting down Optane (a huge mistake IMO; it should have been sold off to the big memory companies).
https://www.techradar.com/news/intel-is-shutting-down-its-optane-memory-business

That much is also proven by history. So until you can show me that they're willing to enter the DRAM market, they'll be sourcing it from one of the major suppliers.

Be it SK Hynix, Samsung, Micron, etc.

There are far better (higher-profitability) things to make in those fabs; DRAM isn't one of them.

At the end of the day, it's up to Intel to put the $$$ into making it standard and buying in bulk.

Will Intel risk doing that? Only Pat Gelsinger and the teams inside will know.

But given the BoM cost for HBM, I think it's going to remain an Enterprise & HPC-only solution, where the costs don't really matter that much and wallets are deeper.

Average mainstream consumers are VERY price sensitive.
Optane had two goals.
One was to bring the main storage as close to the CPU as possible, namely into the DIMM slots.
Now we have super-fast NVMe drives, and Intel provided their CPUs with extra PCIe lanes just for storage, so the CPU is connected directly to super-fast storage.

The second was to reduce CPU overhead by having the main storage act as RAM, so the CPU would waste fewer cycles on reading and writing (SSD->RAM->CPU->RAM->SSD). But with 64GB of RAM right next to the CPU, you can spend some cycles on caching the storage and still be much faster than using the relatively slower Optane DIMMs directly.

Also, if direct I/O, oneAPI, Resizable BAR, and so on can read and write huge chunks of that RAM at once without using a lot of CPU cycles, then that is even less of a loss.

Everything Intel wanted to do with Optane they can now do even better.
Intel got rid of Optane just in time, before it became completely useless for them.
Intel will not be entering the DRAM business, but it would also be stupid for them not to use their own fabs to make something as simple as RAM/cache, which they are already making for all of their CPUs and GPUs anyway.

https://lenovopress.lenovo.com/lp1066-intel-optane-persistent-memory-100-series
  • Memory Mode
    In this mode, the DCPMMs act as large capacity DDR4 memory modules. In such a configuration, the memory that the operating system recognizes is the DCPMMs; the installed TruDDR4 DIMMs are hidden from the operating system and act as a caching layer for the DCPMMs. In this mode, the persistence feature of the DCPMMs is disabled. This mode does not require the application to be DCPMM-aware.
  • App Direct Mode
    In this mode, the DCPMMs provide all persistence features to the operating system and applications that support them. The operating system presents both TruDDR4 DIMMs and DCPMMs to the applications, as system memory and persistent storage respectively.
    Depending on the configuration in UEFI and the operating system, the DCPMMs appear as one of two types of namespaces:
    • Direct access (DAX): byte-addressable storage accessible via an API. The applications must be DCPMM-aware and use the published APIs to implement the DCPMM features.
    • Block storage: the persistent memory is presented to applications as a block storage device, similar to an SSD. The operating system needs to be DCPMM-aware; however, the applications do not.
No I very much am not and in fact was calling out the previous post claiming the forums were full on intel fanboys. You are putting A LOT of words in my mouth also. I am all for comparing and contrasting products. Its one reasons I love Toms. I want a competitive everyone in the CPU space. BUT the number of blind "intel" is better attitudes has increased expontially since Zen 3 and its hard to miss in any tech forum or tubers comments. I have no love for AMD fanboys either. They can be just as bad if not worse at times riding the hype train. And neither fanboy camp is productive in the end. As stated previously I called out the previous post to mine saying its "everyone" thats an Intel fanboy...because you are right in that many in Toms debate without fanboying but enough don't, to jusitify the previous posters perspective to a point, though still wrong it cannot be ignored altogether imho.
Yes, and I used (plural) because I was mainly answering the post you quoted, but you were the last poster, so I quoted you.
 
  • Like
Reactions: atomicWAR

Kamen Rider Blade

Distinguished
Dec 2, 2013
Optane had two goals.
One was to bring the main storage as close to the CPU as possible, namely into the DIMM slots.
Now we have super-fast NVMe drives, and Intel provided their CPUs with extra PCIe lanes just for storage, so the CPU is connected directly to super-fast storage.

The second was to reduce CPU overhead by having the main storage act as RAM, so the CPU would waste fewer cycles on reading and writing (SSD->RAM->CPU->RAM->SSD). But with 64GB of RAM right next to the CPU, you can spend some cycles on caching the storage and still be much faster than using the relatively slower Optane DIMMs directly.

Also, if direct I/O, oneAPI, Resizable BAR, and so on can read and write huge chunks of that RAM at once without using a lot of CPU cycles, then that is even less of a loss.

Everything Intel wanted to do with Optane they can now do even better.
Intel got rid of Optane just in time, before it became completely useless for them.
Intel will not be entering the DRAM business, but it would also be stupid for them not to use their own fabs to make something as simple as RAM/cache, which they are already making for all of their CPUs and GPUs anyway.
Tell that to Intel; they aren't in the business of RAM / cache since it isn't "profitable" for them.

Intel isn't vertically integrated there; they'll just outsource the RAM / cache from somebody else.
Intel wants to focus on its core competency: making CPUs.
Guess what they'll be doing: making CPUs, and hopefully their own GPUs down the line.

I'd be more interested if all the other RAM / NAND flash companies bought up Intel's Optane assets and made a merged RAM/Optane memory package, where you can grab data straight from Optane and shove it into RAM by traveling the least distance possible, since both exist in the same memory package on a DIMM.

That would be VERY exciting.

Having a hybrid RAM/Optane DIMM package in the same slot, where you can address both simultaneously.

Imagine your OS drive sitting on your DIMM / RAM sticks.

That would be "WILD". Imagine if hitting "Sleep" on your OS just dumped the contents of RAM into Optane within the same RAM package.

However, 3D XPoint memory density is 4.5 times higher than DRAM products with the same 20 nm technology, or 3.3 times higher than Samsung's 1x nm DDR4.

So given the extra density per Optane package, 3D-stacking Optane onto DRAM within the same memory package could lower the energy needed, along with having dedicated memory for cold-storing the contents of RAM and room for the OS drive within the DIMM.

Imagine what that could do for laptops.

Zero power drain when put to sleep.

Super-fast wake-up.

Ultra-responsive OS random reads/writes & sustained throughput, since the OS is right next to RAM.
 
Last edited:

KyaraM

Admirable
I am still VERY amused at the blind bias toward Intel all over this forum.

When the 5000 series was hitting, all we saw was "yah but you can get DDR5 with Intel". Then the 7000 series came out and it's "yah but you can get DDR4 with Intel".

When Intel CPUs hit stupidly high power draw and kept going higher, creating the need for expensive cooling and mobos, it was "yah but I only care about game performance and price does not matter".

The 7000 series got much closer to 200W+ than ever before, despite Intel having been well above that for years, and it was "that's crazy, who wants a space heater...."

Now we're getting new X3D chips which have shown impressive gaming gains, and of course we're back to "yah but not everyone only cares about games". Well then, how about you don't buy the gaming-centered upgrade and move on.....

The 7000 series prices came down much faster than I expected or have seen before for a new-gen chip. I'd expect a couple of months to allow those quick to pull the trigger to pay the early adopter tax, and then we'll see some nice discounts. So grab it day 1 and pay the tax, or wait a couple of months and enjoy the discount :)
Yeah, let's ignore all the bashing of Intel's architecture going on, the constant calls that Intel is doomed and too stupid to run their business, or how even people with Intel chips criticize the at-times extreme power requirements at the top, and just call everyone a blind Intel fanboy. And the space heater complaint came from the AMD camp, btw, not the Intel camp. Did you spend the last year under a rock or something? The one with the bias here seems to be yourself, and a huge one at that, seeing how you completely ignore any and all negative comments about Intel.

Btw, you are just as guilty of moving the goal posts, looking around. For a year I read mockery about how you can't count X because the CPUs have to run at stock - with X being anything from power limiting, undervolting, or overclocking the CPU - for the standard user experience in regards to Intel CPUs, whenever it was brought up. But eco mode (basically CPU power limiting easy enough for idiots, since setting a value or two in BIOS is apparently so hard), undervolting, etc. are completely fine when it is AMD, still the standard user experience somehow! Because there you don't have to change BIOS settings? Pffft. Right. Also, don't even mention Windows programs for that; Intel has them, too, not just AMD. And now looking around in this very topic it's "just use Process Lasso to bind cache-sensitive games to the 3D-cache CCD and clock-speed-sensitive games to the normal CCD, problem solved!", when the same argument was completely rejected ("oh, but I shouldn't have to use a third-party tool to bind games to the P-cores exclusively!") whenever it was brought up with Intel's hybrid CPUs over the past year. Inter-CCD latency (which someone, I think Terry, has a nice graph for) is also completely ignored, while E-cores are made out to be the biggest gaming issue ever. But Intel fanboyism is everywhere. LMFAO.

I don't know that its the whole forum...BUT Intel fanboys have been up in arms and extremely vocal as of late. Its not hard to see that is for sure. I don't get fanboying for a corporation. They certainly care little for you beyond your dollars. We should all want to encourage stronger competition which fosters innovation otherwise we get a decade of the same old same old, like quad cores...and we all lose. Right now its a great time for CPUs. Competition and innovation has never been stronger in the space IMHO. We should all hope things stay this competitive for the foreseeable future. Imagine where we'd be if Faildozer never happened orARM/Apple silicon had become more competitive a decade or more sooner.
I recommend they stop trying to make Intel users out to be idiots, as some here tend to do, and drop the virtue signaling I got from some AMD fanboys ("I'm so much better than you because I don't support Intel, and you are bad for doing so!"). If you don't attack people, they are less likely to shoot back at you; it's as simple as that. I also don't sense a lick of interest in competition from many AMD fans, just schadenfreude about Intel seemingly doing badly. The stupid bashing and uninformed blubbering about how the CPUs work can also go away. And also stop acting as if AMD never does anything wrong. That should help. AMD cares just as little for you as Intel; big corps are all only out for your money and satisfying their investors. AMD has proven recently (and before, but people don't want to hear that either) that they are no better there, and the sooner people understand that, the better.
 
Last edited:
  • Like
Reactions: CorpRebel
I've been buying AMD for years, since the early 2000s. They were always a good value option to me. I like games as much as the next person, but these prices are outrageous. I realize they are targeted at the high end, however.

If I were buying today from AMD, I might try to get the free RAM deal from Micro Center or go for a 7700 non-X due to its performance vs lower temps/power consumption. That said, I've got my 5800X and I'm sitting on that until at least the end of this year.

At that time, if I want to buy, I'm considering Intel as well. If the 14600K is priced similar to the 13600K, that might be the value proposition. Right now though, I feel like the new CPUs aren't enough of a jump from my 5800X to justify a whole platform upgrade.
 
  • Like
Reactions: KyaraM

oofdragon

Honorable
Oct 14, 2017
What people usually refer to as fanboys are actually professionals paid to spread negative propaganda about the other company at the most-read review sites; of course they will always have a reason to like the other brand better.

Now, making it short... the 7950X3D will mop the floor with the 13900KS, of course. When it comes to productivity, there's a graphic there showing 50% quicker file compression and 25% quicker Adobe Premiere Pro, so we can expect AMD to trump Intel everywhere.
 
I'd be more interested if all the other RAM / NAND flash companies bought up Intel's Optane assets and made a merged RAM/Optane memory package, where you can grab data straight from Optane and shove it into RAM by traveling the least distance possible, since both exist in the same memory package on a DIMM.

That would be VERY exciting.
You don't have to imagine; you just have to spend something like $900 for a 128GB DIMM module.
Although they might be cheaper by now.
https://www.anandtech.com/show/14180/pricing-of-intels-optane-dc-persistent-memory-modules-leaks
I already quoted this: you can have persistent and normal DIMMs at the same time in your system, probably even on the same memory channel.
App Direct Mode
In this mode, the DCPMMs provide all persistence features to the operating system and applications that support them. The operating system presents both TruDDR4 DIMMs and DCPMMs to the applications, as system memory and persistent storage respectively.