We put the Core i9-13900K and Ryzen 9 7950X through a six-round fight to see who comes out on top.
AMD Ryzen 9 7950X vs Intel Core i9-13900K Face Off : Read more
Your point is valid, but I wouldn't call them fanboys; I'd call them choice boys. Some people feel they're being taken advantage of when only one manufacturer of a product is on top, so they're willing to go with the underdog on principle, even if they end up with the lesser product, just to avoid following "the sheep." But I wholeheartedly disagree with your sentiment that you gave up a decade with AMD. Ryzen forced Intel to stop gouging for 4-core CPUs for a "decade," and it made the CPU market competitive again. We can also thank AMD for putting multiple cores on the market affordably while Intel was telling us "you don't need them."
Except this article is not considering the price drops AMD announced. A 7950X is now only $574, not $699, for example. This article needs a minor update ASAP for the wrong pricing, as the price difference changes the outcome considerably, IMHO.
At the time of publishing, the article mentions that in three different sections:
1.) Features and Specifications
2.) Pricing
3.) Conclusion
I even put it in italics in the first section so you wouldn't miss it 😛
That 32MB of cache helps single-core performance a ton, more than extra cores would, for sure. Look at the Ryzen 3000-to-5000-series improvement, where most of the gain came from going from 16MB of L3 available to each core to 32MB, thanks to the CCX rearrangement. Also, the low-cache Ryzen mobile/G parts perform close to the Skylake architecture in games, while Ryzen desktop parts with more cache are way past it.
It's a funny thing that a monolithic 10nm chip wins on price and performance over 5nm + 6nm chiplets, with their faster clock speeds, higher transistor counts, and, of course, lower production costs.

AMD needs to redesign its CPUs for a better price-to-performance ratio to fight the overall cost of the system. It's not a good idea to spend 100% more money on an upgrade (motherboard and DDR5 plus the Ryzen chip) and get roughly 35% more performance (13% IPC and 25% clock speed). They need to make a higher-performance CPU at the same price point. I think they can do it with a single CCD holding 16 cores, replacing the RDNA 2 block on the IOD with L3 cache.

Multiple CCDs add latency, and I think AMD needs to make one 16-core CCD without L3 cache on it; that would make benchmarks faster across all games and applications. Some games run faster on one CCD than on two CCDs with more cores. And as AMD did with the RX 7900 series, the L3 could live on the IOD, because placing 32MB of L3 cache on expensive 5nm CCD silicon is a very bad idea. Removing the RDNA 2 iGPU from the IOD and replacing it with two blocks of L3 cache is a good idea: integrated graphics are only needed in low-budget and office systems that will use cheap DDR4 and cheap motherboards anyway, and anyone wanting more can still use Ryzen 7000 with the RDNA 2 iGPU. Mainstream and high-end builds will only use discrete PCIe graphics. Also, enable two more cores in the Ryzen 5 (an "8600X" or whatever) to compete with Intel, which puts more E-cores (6P + 4E) inside the Core i5; a cluster of E-cores in turbo mode can beat more than one P-core. That Ryzen 5 "8600X" would have 8 cores, 105W, and half the L3 blocks enabled on the IOD. A Ryzen 7 "8700XTX" (or X3D, or whatever) would have the same core count (8 cores) but double the L3 (all blocks enabled on the IOD) and a higher 170W power envelope. And a Ryzen 9 "8990XTX" (or whatever they'd call it) would enable all 16 cores and all L3 blocks at a maximum of 170W. I think those would be better price-per-performance choices; combined with the insane motherboard and DDR5 prices, people would get more value.
It was nice to upgrade through three different CPU generations on one motherboard.
"Having to upgrade CPUs every year is a new AMD thing."

Let me inform you of why.

First, Intel shareholders are very frustrated right now: profit is down significantly since Alder Lake launched, because margins were cut sharply to hit the price points needed to stave off market-share loss to AMD. Intel will absolutely raise prices next generation, given the prospect of shareholders suing over dereliction of duty.

Second, AMD does not have in-house fabrication for its processors. It has to purchase wafers, lithography, and packaging from third-party companies that demand healthy margins for their services. That is the main reason AMD pursued chiplet-based products: to partially offset the added cost of third-party manufacturing.

Third, AMD did not implement compatibility with both DDR4 and DDR5 because its socket generations last 2.5 times longer than Intel's. Next generation, Intel will be DDR5-only when it switches sockets for the next two-year cycle, whereas AMD's design philosophy calls for a socket with a five-year lifespan, and AMD buyers would be very angry if their still-relevant motherboards couldn't take Zen 5, Zen 6, and so on because they had bought a DDR4 600-series board. Since AM5 will outlast Intel's two-year sockets by a wide margin, it makes no design sense to ship an end-of-line DDR4 board when the marketing pitch is "buy one motherboard and have five years of easy CPU upgrades on each socket."

If you do an actual cost analysis between Intel and AMD, AMD is cheaper in the long run to stay up to date. On AMD, you buy one motherboard, one set of DDR5, and a CPU, then later buy the last CPU generation compatible with AM5 to stay current. On Intel, you buy Raptor Lake, a set of DDR4 to save $50-100, and a DDR4 motherboard to save another $100, and then four years later you're buying a new-generation CPU, a new set of DDR5, and a new motherboard. And that scenario is only two CPU purchases on AM5; if you're a power user who upgrades every generation, it's one motherboard, one set of DDR5, and four CPUs on AMD versus three motherboards, DDR4 plus DDR5, and four CPUs on Intel. A rough tally of the two-purchase scenario is sketched below.
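To put rough numbers on that comparison, here's a minimal sketch in Python; every price is an illustrative assumption of mine (round numbers, not quotes from the article or current street prices), following the two-CPU-purchase scenario above.

```python
# Hypothetical long-run platform cost tally for the two-CPU-purchase
# scenario. All prices are illustrative assumptions, not real quotes.
amd_am5 = {
    "AM5 motherboard (kept for the socket's life)": 250,
    "DDR5 kit": 200,
    "CPU now (e.g., a 7950X)": 600,
    "last AM5-compatible CPU later": 600,
}
intel = {
    "DDR4 board now + new board in ~4 years": 150 + 250,
    "DDR4 kit now + DDR5 kit later": 100 + 200,
    "CPU now (e.g., a 13900K)": 600,
    "next-platform CPU later": 600,
}

for name, parts in (("AMD/AM5", amd_am5), ("Intel", intel)):
    print(f"{name}: ${sum(parts.values())}")
```

Under those made-up prices the AM5 route comes out $250 cheaper; the gap obviously moves with whatever real prices you plug in.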
"You really can't lose with either one. AMD's new socket is good for some years, and Intel's 13th-gen socket is dead as of now. What I've personally found is a CPU (i.e., thread) bottleneck with the 32 threads on my 7950X and my RTX 4090. So many games are stuck at 60% CPU load and 50-60% GPU load. We need game developers to really step up with better engine support for these new CPUs. Granted, I can max everything out at 4K, but optimization could be far better than what's currently available. This isn't a Windows issue; if the developer has configured the game for 8 threads max, Windows can't pull a rabbit out of its ass. UE4 is good for 16 threads if the devs care to optimize for it, and UE5 is good for 128 threads if they do too. Only time will tell... Happy gaming! Plus side: my build is future-proofed."

Sounds like a single-thread (ST) bottleneck. Do you have one or two threads at 100% and a lot not doing much? Most games still have a primary thread even if they are multithreaded. And of course most won't be able to fully use 32 threads, but you should still be able to max that 4090 at 4K, 1440p, maybe even 1080p. I only have a 3080, half your card, but it maxes out below 720p.
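If anyone wants to check that diagnosis, here's a minimal sketch using the third-party psutil package (any per-core monitor, Task Manager included, shows the same thing); run it while the game is running and look for one or two pegged logical CPUs.

```python
# Spot a single-thread bottleneck: sample per-logical-CPU load while
# the game runs and flag the one-or-two-pegged-cores pattern.
# Requires the third-party psutil package: pip install psutil
import psutil

loads = psutil.cpu_percent(interval=2.0, percpu=True)  # 2-second sample
pegged = [pct for pct in loads if pct > 90.0]
average = sum(loads) / len(loads)

print("Per-logical-CPU load:", loads)
if pegged and len(pegged) <= 2 and average < 50.0:
    print("One or two threads pegged while the rest idle: "
          "likely a primary-thread (ST) bottleneck.")
```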
"If anyone cares, I did some math on power consumption and cost. If you use your shiny new computer 8 hours a day for a year, the cost of that electricity needs to be considered, and anyone spending this kind of money on a chip is probably going to use the machine many hours per day. The Intel chip uses 31 watts more power in heavy use; based on those assumptions (8 hours/day of heavy use, 15 cents per kWh), that works out to roughly $20 a year. Anybody with more time want to break this down better?"

The Intel one does use more power in heavy use, but to be relevant you need to look at how you will actually be using the chip during those "8 hours a day." A worked version of the math is below.
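Taking up that request: a minimal sketch of the arithmetic under the numbers quoted above (31 W delta, 8 hours/day, $0.15/kWh); the duty-cycle factor is my own illustrative assumption, capturing the "how you actually use it" point.

```python
# Yearly cost of the 13900K's extra draw under the quoted assumptions.
extra_watts = 31            # extra heavy-load draw (from the comment)
hours_per_day = 8
rate_usd_per_kwh = 0.15
heavy_load_fraction = 1.0   # assumed: 1.0 = heavy load for all 8 hours

kwh_per_year = extra_watts * hours_per_day * 365 * heavy_load_fraction / 1000
cost = kwh_per_year * rate_usd_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> ${cost:.2f}/year")
# Full heavy load gives ~90.5 kWh -> ~$13.58/year, a bit under the ~$20
# figure above; at a 50% duty cycle it drops to roughly $6.80/year.
```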
Let's not get into my fanboy comments. Both Intel and AMD released beasts.
But AMD Fanboys will ignore the data.
And every other YouTube channel and media outlet will say, "but AMD's 3D chips are coming out. Wait."
That's all anyone says when Intel beats AMD: wait for AMD's next thing to come out. I know, because I've been watching this show for decades.
Even when AMD chips were crap, people kept complaining about Intel's prices and were telling everyone to buy the bargain. Unfortunately I listened, and I missed out on a decade of great Intel chips, all the while having to keep buying new AMD chips to keep up. So did I really save money?
The older Intel chips, we now know, people were and still are effectively using, 10 years running.
Meh....
FWIW, I'm still rocking my i7-7600 non-K. Only now, paired with an RX 6800 XT, is it a bottleneck, and not that bad a one. What is bad: I'm busy converting a large video library to the AV1 codec with my Arc A750 (I have two cards in my system), and the huge bottleneck in HandBrake appears to be the CPU, not the A750.