News: AMD Ryzen 9 7950X vs Intel Core i9-13900K Face Off

ManDaddio

Reputable
Oct 23, 2019
99
59
4,610
But AMD fanboys will ignore the data.

And every other YouTube channel and media outlet will say, "But AMD's 3D chips are coming out. Wait."

That's all anyone says when Intel beats AMD: wait for the next AMD release. I know because I've been watching this show for decades.

Even when AMD chips were crap, people kept complaining about Intel's prices and were telling everyone to buy the bargain. Unfortunately, I listened and missed out on a decade of great Intel chips, all the while having to keep buying a new AMD chip to keep up. So did I really save money?
We now know people were, and still are, effectively using those older Intel chips ten years on.
Meh...
 

kiniku

Distinguished
Mar 27, 2009
246
68
18,760
But AMD fanboys will ignore the data.

And every other YouTube channel and media outlet will say, "But AMD's 3D chips are coming out. Wait."

That's all anyone says when Intel beats AMD: wait for the next AMD release. I know because I've been watching this show for decades.

Even when AMD chips were crap, people kept complaining about Intel's prices and were telling everyone to buy the bargain. Unfortunately, I listened and missed out on a decade of great Intel chips, all the while having to keep buying a new AMD chip to keep up. So did I really save money?
We now know people were, and still are, effectively using those older Intel chips ten years on.
Meh...
Your point is valid, but I wouldn't call them fanboys; I'd call them choice boys. Some people feel they are being taken advantage of when only one manufacturer of a product is on top, so they are willing to go with the underdog out of principle, even if they end up with the lesser product, just to not follow "the sheep." But I wholeheartedly disagree with your sentiment that you gave up a decade with AMD. Ryzen forced Intel to stop gouging for 4-core CPUs for a "decade," and it made the CPU market competitive again. We can also thank AMD for putting multiple cores on the market affordably while Intel was telling us "you don't need them."
 
Nov 22, 2022
11
1
15
We put the Core i9-13900K and Ryzen 9 7950X through a six-round fight to see who comes out on top.

AMD Ryzen 9 7950X vs Intel Core i9-13900K Face Off : Read more

It's a funny thing that a monolithic 10nm chip wins on price and performance over a 5nm + 6nm chiplet design that has faster clock speeds, a higher transistor count, and, of course, lower production cost.

AMD needs to redesign its CPUs for a better price-to-performance ratio to fight the overall cost of the system. It's not a good idea to upgrade by spending 100% more money (motherboard and DDR5 plus the Ryzen chip) to get roughly 35% more performance (13% IPC and 25% clock speed). They need to make a higher-performance CPU at the same price point. I think they can do it by making a single CCD with 16 cores and replacing the RDNA 2 graphics on the IOD with L3 cache.

Multiple CCDs add latency, and I think AMD needs to make one 16-core CCD without L3 cache; that would make benchmarks faster in all games and applications. Some games run faster on one CCD than on two CCDs with a higher core count. And as AMD did in the RX 7900 series, the L3 could be placed on the IOD, because putting 32 MB of L3 cache on an expensive 5nm CCD is a very bad idea. Removing the RDNA 2 iGPU from the IOD and replacing it with two blocks of L3 cache would be a good idea. Integrated graphics are only needed in low-budget and office systems that will use cheap DDR4 and a cheap motherboard; or, if they want something more advanced, they can still use Ryzen 7000 with the RDNA 2 iGPU. Mainstream and high-end systems will only use discrete PCIe graphics. Also, enable two more cores in the Ryzen 5 (8600X or whatever) to compete with Intel, which has more E-cores (6P + 4E) in its Core i5 (a number of E-cores in turbo mode can beat more than one P-core). The Ryzen 5 8600X (or whatever) would have 8 cores, 105 W, and half of the L3 cache enabled on the IOD. The Ryzen 7 8700 XTX (or X3D, or something) would have the same core count (8 cores) but double the L3 cache (all L3 blocks enabled on the IOD) and a higher 170 W power envelope. The Ryzen 9 8990XTX (or whatever they call it) would enable all 16 cores and all L3 cache blocks at a maximum of 170 W. I think those would be good ways to offer better price-per-performance; combined with the insane motherboard and DDR5 prices, people would get more worthwhile choices.
 

atomicWAR

Glorious
Ambassador
It's a funny thing that a monolithic 10nm chip wins on price and performance over a 5nm + 6nm chiplet design

Except this article isn't considering the price drops AMD announced. A 7950X is now only $574, not $699, for example. This article needs a minor update ASAP for the wrong pricing, as the price difference changes the outcome considerably, IMHO.

Edit: I am blind; don't listen to my price complaints.
 
Last edited:

fball922

Distinguished
Feb 23, 2008
179
24
18,695
I will gladly admit to being a fan of AMD, though probably not a "fanboy," because I have owned Intel products as well. I hold a grudge against Intel for its shenanigans with OEMs through its rebate programs in the early 2000s; it set competition (and consumer choice) back in a big way.

Anyway, the 7000 series is definitely underwhelming, especially for the price. I think there are probably a few things at play here: 1) wafer prices have gone up for AMD, 2) AMD feels like the big dog after the last generation and thinks it can justify a price premium, and 3) they are not overly concerned about desktop demand (a shrinking market), as they would prefer to keep it restrained and dedicate limited manufacturing capacity to their server line, where they make most of their money.

Just my 2c.
 
  • Like
Reactions: King_V

PaulAlcorn

Managing Editor: News and Emerging Technology
Editor
Feb 24, 2015
858
315
19,360
Except this article isn't considering the price drops AMD announced. A 7950X is now only $574, not $699, for example. This article needs a minor update ASAP for the wrong pricing, as the price difference changes the outcome considerably, IMHO.

At the time of publishing, the article mentions that in three different sections:
1.) Features and Specification
2.) Pricing
3.) Conclusion

I even put it in italics in the first section so you wouldn't miss it :p
 

rluker5

Distinguished
Jun 23, 2014
620
371
19,260
I have a 13900kf.
For overclocks aimed at performance in applications with variable loads, like gaming (really my most important heavy use, since I don't use my PC for work), I favor faster single-core clocks, and I have also raised the multi-core clocks, since that doesn't increase total heat output much and helps applications that use a transient number of cores.
For my seemingly average 13900kf I run daily:
1P: 6.0 GHz, +20 mV
2: 6.0, +20
3: 5.9, +15
4: 5.8, +10
5: 5.8, +10
6: 5.7, ---
7: 5.6, ---
8: 5.6, ---
1-4E: 4.6, ---
5-16E: 4.5, ---
LLC: 5 (Asus scale)
Balanced power plan, usually.
I found I needed the extra volts for transient clocks in a couple of games. Also, 5.6 thermal-throttles under stress tests, and my chip will only hold 5.4 before thermal throttling with my $60 280 mm AIO, for 40-41K in CB23. But it works flawlessly for games, and I have no heat issues from the increased non-all-core clocks.

On paper this is technically a 5.6 GHz OC, but it averages 225 MHz over stock, whereas a flat 5.6 GHz would be only 25 MHz over the stock clocks (5.8 GHz on 2 cores, 5.5 GHz on 8). So there is some potential variability in OC testing that isn't easy to explain away, because a lot depends on the silicon lottery, the motherboard manufacturer, and a lot of other things. Listing everything like I did would bore a lot of readers, as I likely have with this post. At least it will get mostly covered up unless you want to read more :p
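
To make that averaging explicit, here is a rough sketch in Python; it assumes the table above is read as P-core clocks by active-core count and stock behavior of 5.8 GHz on 2 cores / 5.5 GHz on all 8, so the real averages depend on how many cores a workload keeps busy:

```python
# Rough check of the "225 MHz over stock" figure, treating each of the eight
# active-core counts as equally weighted. Clocks are in GHz.
oc_clocks    = [6.0, 6.0, 5.9, 5.8, 5.8, 5.7, 5.6, 5.6]   # daily overclock listed above
stock_clocks = [5.8, 5.8, 5.5, 5.5, 5.5, 5.5, 5.5, 5.5]   # stock: 5.8/2 cores, 5.5/8 cores
flat_56      = [5.6] * 8                                   # a flat 5.6 GHz all-core OC

avg = lambda clocks: sum(clocks) / len(clocks)
print(f"OC average over stock:   {(avg(oc_clocks) - avg(stock_clocks)) * 1000:.0f} MHz")  # ~225
print(f"Flat 5.6 GHz over stock: {(avg(flat_56) - avg(stock_clocks)) * 1000:.0f} MHz")    # ~25
```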

But favoring higher single-core clocks benefits both designers' chips, and I recommend it for Intel as well.
 

rluker5

Distinguished
Jun 23, 2014
620
371
19,260
It's a funny thing that a monolithic 10nm chip wins on price and performance over a 5nm + 6nm chiplet design that has faster clock speeds, a higher transistor count, and, of course, lower production cost.

AMD needs to redesign its CPUs for a better price-to-performance ratio to fight the overall cost of the system. It's not a good idea to upgrade by spending 100% more money (motherboard and DDR5 plus the Ryzen chip) to get roughly 35% more performance (13% IPC and 25% clock speed). They need to make a higher-performance CPU at the same price point. I think they can do it by making a single CCD with 16 cores and replacing the RDNA 2 graphics on the IOD with L3 cache.

Multiple CCDs add latency, and I think AMD needs to make one 16-core CCD without L3 cache; that would make benchmarks faster in all games and applications. Some games run faster on one CCD than on two CCDs with a higher core count. And as AMD did in the RX 7900 series, the L3 could be placed on the IOD, because putting 32 MB of L3 cache on an expensive 5nm CCD is a very bad idea. Removing the RDNA 2 iGPU from the IOD and replacing it with two blocks of L3 cache would be a good idea. Integrated graphics are only needed in low-budget and office systems that will use cheap DDR4 and a cheap motherboard; or, if they want something more advanced, they can still use Ryzen 7000 with the RDNA 2 iGPU. Mainstream and high-end systems will only use discrete PCIe graphics. Also, enable two more cores in the Ryzen 5 (8600X or whatever) to compete with Intel, which has more E-cores (6P + 4E) in its Core i5 (a number of E-cores in turbo mode can beat more than one P-core). The Ryzen 5 8600X (or whatever) would have 8 cores, 105 W, and half of the L3 cache enabled on the IOD. The Ryzen 7 8700 XTX (or X3D, or something) would have the same core count (8 cores) but double the L3 cache (all L3 blocks enabled on the IOD) and a higher 170 W power envelope. The Ryzen 9 8990XTX (or whatever they call it) would enable all 16 cores and all L3 cache blocks at a maximum of 170 W. I think those would be good ways to offer better price-per-performance; combined with the insane motherboard and DDR5 prices, people would get more worthwhile choices.
That 32MB of cache helps single-core performance a ton, more than extra cores would, for sure. Look at the improvement from the 3000 series to the 5000 series, where most of it came from going from 16MB of L3 available per core to 32MB of L3 available per core thanks to the CCD rearrangement. Also, the low-cache Ryzen mobile/G parts perform close to the Skylake architecture in games, while Ryzen desktop with more cache is way past that.

Maybe taking the cache out for their version of E-cores would work: a main chiplet with cache and an additional backup compute chiplet without. But you would have to fix Windows to get it to use the right chiplet first; right now it isn't handling that very well.
 
  • Like
Reactions: atomicWAR
Jul 7, 2022
588
551
1,760
It's a funny thing that a monolithic 10nm chip wins on price and performance over a 5nm + 6nm chiplet design that has faster clock speeds, a higher transistor count, and, of course, lower production cost.

AMD needs to redesign its CPUs for a better price-to-performance ratio to fight the overall cost of the system. It's not a good idea to upgrade by spending 100% more money (motherboard and DDR5 plus the Ryzen chip) to get roughly 35% more performance (13% IPC and 25% clock speed). They need to make a higher-performance CPU at the same price point. I think they can do it by making a single CCD with 16 cores and replacing the RDNA 2 graphics on the IOD with L3 cache.

Multiple CCDs add latency, and I think AMD needs to make one 16-core CCD without L3 cache; that would make benchmarks faster in all games and applications. Some games run faster on one CCD than on two CCDs with a higher core count. And as AMD did in the RX 7900 series, the L3 could be placed on the IOD, because putting 32 MB of L3 cache on an expensive 5nm CCD is a very bad idea. Removing the RDNA 2 iGPU from the IOD and replacing it with two blocks of L3 cache would be a good idea. Integrated graphics are only needed in low-budget and office systems that will use cheap DDR4 and a cheap motherboard; or, if they want something more advanced, they can still use Ryzen 7000 with the RDNA 2 iGPU. Mainstream and high-end systems will only use discrete PCIe graphics. Also, enable two more cores in the Ryzen 5 (8600X or whatever) to compete with Intel, which has more E-cores (6P + 4E) in its Core i5 (a number of E-cores in turbo mode can beat more than one P-core). The Ryzen 5 8600X (or whatever) would have 8 cores, 105 W, and half of the L3 cache enabled on the IOD. The Ryzen 7 8700 XTX (or X3D, or something) would have the same core count (8 cores) but double the L3 cache (all L3 blocks enabled on the IOD) and a higher 170 W power envelope. The Ryzen 9 8990XTX (or whatever they call it) would enable all 16 cores and all L3 cache blocks at a maximum of 170 W. I think those would be good ways to offer better price-per-performance; combined with the insane motherboard and DDR5 prices, people would get more worthwhile choices.

Let me explain why.

First, Intel shareholders are very frustrated right now because profit is down significantly since Alder Lake came out; margins were cut significantly in order to price those processors where they had to be to stave off market-share loss to AMD. Intel will 100% raise its prices next generation, given the prospect of shareholders suing Intel for dereliction of its duty to them.

Second, AMD does not have in-house fabrication to build its processors. It has to purchase wafers, lithography, and packaging from third-party companies that demand healthy margins for their services. That is the main reason AMD pursued chiplet-based products: to partially offset the added costs of third-party manufacturing.

Third, AMD did not implement compatibility with both DDR4 and DDR5 because its socket generations last about 2.5 times longer than Intel's. Next generation, Intel will be DDR5-only when it switches sockets for the next two-year cycle, whereas AMD's design philosophy is a socket with a five-year lifespan, and AMD buyers would be very angry that their five-year motherboards weren't compatible with Zen 5, Zen 6, etc. because they had bought a DDR4 600-series board. From a design perspective, since the AM5 socket will last significantly longer than Intel's two-year sockets, it makes no sense to include DDR4 as a dead-end motherboard option when the marketing pushes "buy one motherboard and have five years of easy CPU upgrades" on each of their sockets.

If you do an actual cost analysis between Intel and AMD, AMD is cheaper in the long run for staying up to date (i.e., AMD = buy one motherboard, a set of DDR5, and a CPU, then later buy the last generation of CPU compatible with AM5 to stay relevant; whereas Intel = buy Raptor Lake, a set of DDR4 to save $50-100, and a DDR4 motherboard to save another $100, then four years later have to purchase a new-generation CPU, a new set of DDR5, and a new motherboard). And that scenario is only two CPU purchases on AM5; if you are a power user who anticipates upgrading at each new generation, it's one motherboard, one set of DDR5, and four CPUs for AMD versus three motherboards, DDR4 and DDR5, and four CPUs.
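
To put rough numbers on that, here is a minimal sketch; every price in it is a made-up placeholder (none come from the article or this thread), and the part counts simply follow the two scenarios above:

```python
# Total platform cost over one socket's life. All prices are hypothetical placeholders.
def total_cost(motherboards, ram_kits, cpus):
    """Sum every part bought over the whole upgrade cycle."""
    return sum(motherboards) + sum(ram_kits) + sum(cpus)

# Casual path: one CPU now, one more at the end of the socket's life.
amd_casual   = total_cost(motherboards=[250],      ram_kits=[200],      cpus=[550, 550])
intel_casual = total_cost(motherboards=[150, 250], ram_kits=[100, 200], cpus=[550, 550])

# Power-user path: a new CPU every generation.
amd_power   = total_cost(motherboards=[250],           ram_kits=[200],      cpus=[550] * 4)
intel_power = total_cost(motherboards=[150, 250, 250], ram_kits=[100, 200], cpus=[550] * 4)

print(f"Casual:     AMD ${amd_casual} vs Intel ${intel_casual}")
print(f"Power user: AMD ${amd_power} vs Intel ${intel_power}")
```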
 
But AMD fanboys will ignore the data.

And every other YouTube channel and media outlet will say, "But AMD's 3D chips are coming out. Wait."

That's all anyone says when Intel beats AMD: wait for the next AMD release. I know because I've been watching this show for decades.

Even when AMD chips were crap, people kept complaining about Intel's prices and were telling everyone to buy the bargain. Unfortunately, I listened and missed out on a decade of great Intel chips, all the while having to keep buying a new AMD chip to keep up. So did I really save money?
We now know people were, and still are, effectively using those older Intel chips ten years on.
Meh...
It was nice to upgrade through three different CPU generations on one motherboard.
 

rluker5

Distinguished
Jun 23, 2014
620
371
19,260
Let me explain why.

First, Intel shareholders are very frustrated right now because profit is down significantly since Alder Lake came out; margins were cut significantly in order to price those processors where they had to be to stave off market-share loss to AMD. Intel will 100% raise its prices next generation, given the prospect of shareholders suing Intel for dereliction of its duty to them.

Second, AMD does not have in-house fabrication to build its processors. It has to purchase wafers, lithography, and packaging from third-party companies that demand healthy margins for their services. That is the main reason AMD pursued chiplet-based products: to partially offset the added costs of third-party manufacturing.

Third, AMD did not implement compatibility with both DDR4 and DDR5 because its socket generations last about 2.5 times longer than Intel's. Next generation, Intel will be DDR5-only when it switches sockets for the next two-year cycle, whereas AMD's design philosophy is a socket with a five-year lifespan, and AMD buyers would be very angry that their five-year motherboards weren't compatible with Zen 5, Zen 6, etc. because they had bought a DDR4 600-series board. From a design perspective, since the AM5 socket will last significantly longer than Intel's two-year sockets, it makes no sense to include DDR4 as a dead-end motherboard option when the marketing pushes "buy one motherboard and have five years of easy CPU upgrades" on each of their sockets.

If you do an actual cost analysis between Intel and AMD, AMD is cheaper in the long run for staying up to date (i.e., AMD = buy one motherboard, a set of DDR5, and a CPU, then later buy the last generation of CPU compatible with AM5 to stay relevant; whereas Intel = buy Raptor Lake, a set of DDR4 to save $50-100, and a DDR4 motherboard to save another $100, then four years later have to purchase a new-generation CPU, a new set of DDR5, and a new motherboard). And that scenario is only two CPU purchases on AM5; if you are a power user who anticipates upgrading at each new generation, it's one motherboard, one set of DDR5, and four CPUs for AMD versus three motherboards, DDR4 and DDR5, and four CPUs.
Having to upgrade CPUs every year is a new AMD thing.
A 4770K from 2013 will still run 99% of games at well over 60 fps and still feels fast in light use. A 13900K with DDR4 and some low-end mobo (with VRMs rated for 500 W, because they all are nowadays for some unknown reason) won't need to be replaced for at least that long for the vast majority of users. They could easily skip DDR5 entirely.
Remember how silly it sounded when Huang said "the more you buy, the more you save"? That's the case for AM5. You're better off with an X3D if you're on AM4 already, or a 5700 if you're a normal user. Those chips are quite fast and will give enough performance that the average user would have a hard time telling the difference in real life until AM5 is EOL. That would be saving.
 

gsxrme22

Distinguished
Jul 27, 2009
15
4
18,515
You really can't lose with either one. AMD's new socket is good for some years, and Intel's 13th-gen socket is dead as of now. What I've personally found is a CPU (i.e., thread) bottleneck with 32 threads on my 7950X and my RTX 4090. So many damn games are stuck at 60% CPU load and 50-60% GPU load. We need game developers to really step up with better game-engine support for these new CPUs. Granted, I can max everything out at 4K, but I could get way better performance than what's currently available. This is not a Windows issue; if the game developer has the game optimized for 8 threads max, then Windows can't pull a rabbit out of its ass. UE4 is good for 16 threads if the devs care to optimize for it, and UE5 is good for 128 threads if they also care to optimize for it.

Only time will tell.... Happy Gaming!

On the plus side, my build is future-proofed.
 

joker965

Prominent
Nov 4, 2021
4
1
515
If anyone cares, I did some math regarding power consumption and cost. If you use your new shiny computer 8 hours a day for a year, then the cost of that electricity needs to be considered. Anyone spending the money on these chips is probably going to use the computer many hours per day. The Intel chip uses 31 watts more power in heavy use. Based on those assumptions, that works out to roughly $14/year in cost (8 hours/day of heavy use, 15¢ per kWh). Anybody with more time want to break this down better?
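
Here is the breakdown, using the same assumptions (31 W extra under load, 8 hours/day, $0.15/kWh):

```python
# Annual electricity cost of drawing 31 W more for 8 hours a day at $0.15/kWh.
extra_watts   = 31
hours_per_day = 8
price_per_kwh = 0.15  # USD

extra_kwh_per_year = extra_watts * hours_per_day * 365 / 1000  # ~90.5 kWh
annual_cost = extra_kwh_per_year * price_per_kwh                # ~$13.6
print(f"{extra_kwh_per_year:.1f} kWh/year -> ${annual_cost:.2f}/year")
```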
 

rluker5

Distinguished
Jun 23, 2014
620
371
19,260
You really can't lose with either one. AMD's new socket is good for some years, and Intel's 13th-gen socket is dead as of now. What I've personally found is a CPU (i.e., thread) bottleneck with 32 threads on my 7950X and my RTX 4090. So many damn games are stuck at 60% CPU load and 50-60% GPU load. We need game developers to really step up with better game-engine support for these new CPUs. Granted, I can max everything out at 4K, but I could get way better performance than what's currently available. This is not a Windows issue; if the game developer has the game optimized for 8 threads max, then Windows can't pull a rabbit out of its ass. UE4 is good for 16 threads if the devs care to optimize for it, and UE5 is good for 128 threads if they also care to optimize for it.

Only time will tell.... Happy Gaming!

On the plus side, my build is future-proofed.
Sounds like a single-thread bottleneck. Do you have one or two threads at 100% and a lot of others not doing much? Most games still have a primary thread even if they are multithreaded. And of course most won't be able to fully use 32 threads, but you should still be able to max that 4090 at 4K and 1440p, maybe 1080p. I only have a 3080, half of yours, but it maxes out below 720p.
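
One rough way to check that is a per-core load snapshot; this is just a sketch using the third-party psutil package, and the 90% threshold is an arbitrary assumption. Tying a hot thread to a specific game process would take more digging:

```python
# Per-logical-core utilization snapshot: if one or two cores sit near 100% while
# the rest idle, the game is likely limited by its primary thread, not core count.
import psutil

for _ in range(5):  # five one-second samples while the game runs
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    busy = [i for i, load in enumerate(per_core) if load > 90]
    print(f"max {max(per_core):5.1f}%  "
          f"avg {sum(per_core) / len(per_core):5.1f}%  cores >90%: {busy}")
```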
 

gsxrme22

Distinguished
Jul 27, 2009
15
4
18,515
Let's not get into my fanboy comments. Both Intel and AMD released beasts with these new products. One can fight about 160 fps vs. my 180 fps in 4K gaming, but both teams did extremely well, and needless to say it's damn good. We are no longer at the point of buying the best and still needing to turn settings down to hit 30 fps. Everyone should be happy that Intel and AMD are creating monsters compared to the "can it play Crysis" days. Even my QX6850 with three GTX 8800 Ultras struggled, and I spent just as much on that build as I did on this current one.

Let's not forget about the dark times of gaming. Yep, I spent $1,800 on three GTX 8800 Ultras, compared to my roughly $1,800 water-cooled Gigabyte RTX 4080, and it's not a goddamn space heater either.

We all need to look back at the past and remember all the crap we went through: an ATI 9800 XTX with a cute little fan that got piping hot and was my first water-block install, just to get the damn thing to run stable back in the SWG days.
 

atomicWAR

Glorious
Ambassador
I was mid-build when this went down. I returned my 7900X and got the 7950X.

Recently a Newegg third-party seller ruined a build I was trying to put together. Onda Technologies tried to upsell me $80 on a Seasonic PSU because they switched suppliers mid-sale, even though the purchase had gone through... supposedly. Newegg ended up giving me a $25-off credit over it, as I was actually trying to close my account with them but couldn't due to refunds that hadn't processed yet. They also promised I wouldn't pay any restocking fees even though some parts had already been mailed out.

Anyway, still pissed, I went to Amazon instead for my build. However, when Newegg was the first to list the sale in the US, I decided to use that credit and ponder (only ponder, for now) giving them a second chance in the long run.

So I returned my 7900X and used that credit. And for two dollars more than I paid, I get four more cores... Newegg almost(?) lost me as a customer over that third-party seller and the fact that they pulled their 12-month 0% financing from their credit card after years and years. Time (and how this CPU purchase proceeds) will tell if Newegg has lost me for good. They are slipping, but these CPU price drops are great. Honestly, these are what the launch MSRPs should have been.
 
  • Like
Reactions: rluker5

rluker5

Distinguished
Jun 23, 2014
620
371
19,260
If anyone cares, I did some math regarding power consumption and cost. If you use your new shiny computer 8 hours a day for a year, then the cost of that electricity needs to be considered. Anyone spending the money on these chips is probably going to use the computer many hours per day. The Intel chip uses 31 watts more power in heavy use. Based on those assumptions, that works out to roughly $14/year in cost (8 hours/day of heavy use, 15¢ per kWh). Anybody with more time want to break this down better?
The Intel one does use more power in heavy use. But to be relevant, you need to look at how you will actually be using the chip for your "8 hours a day".
Most of those heavy workloads are better performed on a more efficient GPU, for time, part cost, and power. My typical use is frame-capped gaming, where Intel is fairly even, and light use, where HWiNFO64 tells me my overclocked 13900KF isn't even using 10 W right now.

The assumption that all people are running CPU render farms with their home PC is flawed.
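
As a variation on the earlier electricity math, here is a sketch that weights the difference by a usage mix instead of assuming 8 hours of heavy load; the hours per day and the per-state wattage deltas are illustrative placeholders, not measurements:

```python
# Annual cost difference when only part of the day is heavy CPU load.
# Hours/day and extra watts per state are assumptions for illustration only.
price_per_kwh = 0.15  # USD
profile = {
    "heavy CPU load":      (2, 31),  # (hours/day, extra watts vs. the other chip)
    "frame-capped gaming": (3, 0),   # roughly even per the post above
    "light use / idle":    (3, 0),   # negligible difference assumed
}

delta_kwh = sum(hours * watts for hours, watts in profile.values()) * 365 / 1000
print(f"Extra energy: {delta_kwh:.1f} kWh/year -> ${delta_kwh * price_per_kwh:.2f}/year")
```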
 

zecoeco

Prominent
BANNED
Sep 24, 2022
83
113
710
Well, sorry to say this, but I no longer trust this site for benchmarking.
I feel like they've shown a great deal of bias, favoring Intel most of the time.
I don't care who wins, but I kind of lost my faith when comparing with other channels and websites.
 

bernardv

Distinguished
Jan 12, 2009
38
7
18,535
But AMD fanboys will ignore the data.

And every other YouTube channel and media outlet will say, "But AMD's 3D chips are coming out. Wait."

That's all anyone says when Intel beats AMD: wait for the next AMD release. I know because I've been watching this show for decades.

Even when AMD chips were crap, people kept complaining about Intel's prices and were telling everyone to buy the bargain. Unfortunately, I listened and missed out on a decade of great Intel chips, all the while having to keep buying a new AMD chip to keep up. So did I really save money?
We now know people were, and still are, effectively using those older Intel chips ten years on.
Meh...

Data? What about the fact that AMD's new platform will be around for years and several generations to come, while Intel is on the way out with this one, just cranking up outdated stuff? Maybe that should be considered alongside the price. What about the extra cost of the high-end cooling required for Intel's power-hungry chips? Where is that data in this article?
 

AndrewJacksonZA

Distinguished
Aug 11, 2011
576
93
19,060
We now know people were, and still are, effectively using those older Intel chips ten years on.
FWIW, I'm still rocking my i7-7600 non-K. Only now, when paired with an RX 6800 XT, is it becoming a bottleneck, and not that bad of one. What is bad is that I'm busy converting a large video library to the AV1 codec with my Arc A750 (I have two cards in my system), and the HUGE bottleneck in HandBrake appears to be the CPU, not the A750.
 
