News Core i9-13900K Early Review Shows Big Gains Over Core i9-12900K

Performance hasn't been this close since the Athlon XP vs Pentium 4 'Northwood' days. I'm excited to see that kind of competition between these two companies a second time in my lifetime.
I like that both sides have oomph, but in completely different ways. It means we will see prices that are very nice for us, and we will be able to play to each side's specialty.
I'm only worried that at low power AMD will absolutely destroy Intel machines, so on AMD parts at 35W and under, prices will be high and availability will be low.
Still, it seems like AMD will give us around 60% more on comparable 5xxx/6xxx laptop chips, so I am very happy about that; I just wish Intel laptops would keep up, so prices won't skyrocket.
 

watzupken

Reputable
Mar 16, 2020
1,007
505
6,070
Results of the next-gen CPUs will be out in no time. If Intel can barely beat AMD, it just shows that they have lost any competitive advantage, be it from an architecture standpoint or from a fab standpoint. The little-core spamming is basically what is saving Intel for now, but it's not a trick they can use forever before AMD pulls the same one. While I am an Alder Lake user myself, I am skeptical that Intel can stay on the same 10nm process while drastically improving every aspect of Raptor Lake over Alder Lake.
 
I like that both sides have oomph, but in completely different ways. It means we will see prices that are very nice for us, and we will be able to play to each side's specialty.
I'm only worried that at low power AMD will absolutely destroy Intel machines, so on AMD parts at 35W and under, prices will be high and availability will be low.
Still, it seems like AMD will give us around 60% more on comparable 5xxx/6xxx laptop chips, so I am very happy about that; I just wish Intel laptops would keep up, so prices won't skyrocket.
On a 35W system, CPU compute becomes much less useful; how many people need Cinebench or other heavily multithreaded apps on a 35W laptop?
Intel, with Quick Sync, gives users many hours of video playback at extremely low power, plus photo editing thanks to the iGPU's AI acceleration: things people actually use on a 35W laptop.
If AMD could keep up with that, then maybe we would get better prices.
Results of the next-gen CPUs will be out in no time. If Intel can barely beat AMD, it just shows that they have lost any competitive advantage
Because you believe that companies operate on e-peen?
The only thing Intel wants is to make sales, and the less money they spend on making those sales, the more money they make.
Barely beating the competition is the best way to maximize profit.
 
If this is true, if Intel really did mitigate the E-core performance deficit in games and the E-cores now actually benefit gaming and other workloads across the board, then this next gen is shaping up to be a spectacular performer!
They are not going to benefit anything that can't use that many cores; for gaming especially, there is barely any difference from a current quad core upwards, other than cache, maybe.
This just means they stop hurting performance, so maybe now some people will shut up about E-cores hurting performance in some cases, although that is doubtful as well.

The Thread Director will still be the same, so there will still be occasions where games get placed on the E-cores by mistake; it's only the lag that occurs on the P-cores when the E-cores are loaded that is fixed.
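In the meantime, the usual user-side workaround is to pin a game to the P-cores yourself so the scheduler can't land it on an E-core. A minimal sketch, assuming a typical 12900K enumeration (the topology is an assumption: 8 hyper-threaded P-cores as logical CPUs 0-15, 8 E-cores as logical CPUs 16-23), that builds the affinity mask tools like `taskset` expect:

```python
# Sketch of a manual workaround: pin a process to the P-cores only.
# Assumed topology (typical 12900K enumeration, not guaranteed):
# 8 hyper-threaded P-cores -> logical CPUs 0-15, 8 E-cores -> 16-23.
P_CORE_CPUS = range(0, 16)

def affinity_mask(cpus):
    """Fold logical CPU indices into the bitmask tools like taskset take."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

print(hex(affinity_mask(P_CORE_CPUS)))  # 0xffff
```

On Linux that would be `taskset 0xffff ./game`; on Windows, Task Manager's "Set affinity" dialog does the same thing by hand.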
 

criticaloftom

Prominent
Jun 14, 2022
26
10
535
Personally, I've never been comfortable with the whole concept of mismatched cores on a die. As for performance, they'll hype up what will probably be a 1 or 2 percent gain over AMD when the dust settles, which, aside from being basically nothing, comes at a massive energy cost that makes the comparison irrational.
Intel needs to work on getting its hot, power-hungry chips on a leash (efficiency) if it wants consumers to get on board with its products, given the direction the market is going.
 
They are not going to benefit anything that can't use that many cores; for gaming especially, there is barely any difference from a current quad core upwards, other than cache, maybe.
This just means they stop hurting performance, so maybe now some people will shut up about E-cores hurting performance in some cases, although that is doubtful as well.

The Thread Director will still be the same, so there will still be occasions where games get placed on the E-cores by mistake; it's only the lag that occurs on the P-cores when the E-cores are loaded that is fixed.
True. But the E-cores in the 12th-gen CPUs actually hurt performance in many cases, not just some.

In an ideal scenario, while the P-cores are running a game at full tilt, the E-cores would handle only those background Windows tasks that they could accomplish without slowing down the rest of the CPU. Unfortunately, the Thread Director may be 13th gen's Achilles' heel in this instance, but we'll see.

Edit - Also, credit where credit is due. If any chip company (Intel, AMD, NVIDIA, etc.) slapped some 'poop on a wafer' and tried to pass it off as 'GREAT!' [with jazz hands], I would call them out on it. I'm not against Intel specifically; I usually go back and forth between Intel and AMD every few generations. I'm against false marketing and advertising, and deceiving the general public.
 
Last edited:
...
Intel needs to work on getting its hot, power-hungry chips on a leash (efficiency) if it wants consumers to get on board with its products, given the direction the market is going.
With Zen 4 going up to a 230W PPT, the 250W of the 13900K won't look like any kind of difference anymore...
If you want high-end performance, you have to use up some watts; it doesn't work any other way, and Zen 4 is the biggest proof of that. No matter how efficient your design is, if you want high (compute) power you need high power.
https://www.tomshardware.com/news/a...zen-7000-power-specs-230w-peak-power-170w-tdp
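The physics backs this up: dynamic CPU power scales roughly as C·V²·f, and chasing a higher clock also requires a higher voltage, so power grows much faster than frequency. A toy illustration (every number below is made up for illustration, not a chip spec):

```python
# Illustrative only: dynamic CPU power scales roughly as C * V^2 * f.
# Higher clocks need higher voltage, so power grows super-linearly
# with frequency. All operating-point numbers below are invented.
def dynamic_power(cap, volts, freq_ghz):
    """Relative dynamic power, proportional to C * V^2 * f."""
    return cap * volts**2 * freq_ghz

base = dynamic_power(1.0, 1.20, 4.0)   # stock-ish operating point
high = dynamic_power(1.0, 1.35, 4.8)   # +20% clock, +12.5% voltage

print(f"+20% clock costs about {high / base - 1:.0%} more power")  # ~52%
```

That is why every flagship, AMD or Intel, pays a steep wattage bill for the last few hundred MHz.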
 

JamesJones44

Reputable
Jan 22, 2021
620
560
5,760
Results of the next-gen CPUs will be out in no time. If Intel can barely beat AMD, it just shows that they have lost any competitive advantage, be it from an architecture standpoint or from a fab standpoint. The little-core spamming is basically what is saving Intel for now, but it's not a trick they can use forever before AMD pulls the same one. While I am an Alder Lake user myself, I am skeptical that Intel can stay on the same 10nm process while drastically improving every aspect of Raptor Lake over Alder Lake.


It worked for years with Intel's Tick-Tock. CPU design is just as important for improved performance as die shrinks are, especially in an era where a 1 to 2 nm step is considered a large die shrink.
 

JamesJones44

Reputable
Jan 22, 2021
620
560
5,760
Personally, I've never been comfortable with the whole concept of mismatched cores on a die. As for performance, they'll hype up what will probably be a 1 or 2 percent gain over AMD when the dust settles, which, aside from being basically nothing, comes at a massive energy cost that makes the comparison irrational.
Intel needs to work on getting its hot, power-hungry chips on a leash (efficiency) if it wants consumers to get on board with its products, given the direction the market is going.

I make no excuses for Intel's peak power usage; it has been very high for a very long time. However, the average person comes nowhere near using as much power as is advertised in benchmarks. Most people use 1 to 4 cores in their daily workload, and if you look at any realistic CPU wattage for either Intel or AMD, it is usually in the 50 to 100 watt range on desktop. There are thousands of posts and videos of people showing CPU wattage during games, web browsing (which is a light workload anyway; not sure why it's even a metric these days), watching video, streaming music, word processing, etc., and the CPU isn't even breaking a sweat (i.e., nowhere near benchmark wattage). Laptops are a different story, but given the spike in Zen 4 power usage, my guess is Intel and AMD will likely be neck and neck there for the first time in a long time.
 
However, the average person comes nowhere near using as much power as is advertised in benchmarks.
This is very true. A Core i5 is actually overkill for "average" home & office use (web browsing, e-mail, and a bit of MS Office) these days, and gaming has largely been GPU-limited for a fair while (although CPU utilisation will increase over time as a given system ages). There are a few home/office tasks that can absolutely hammer the CPU, but they're definitely in the minority (batch editing of camera raw images is one).
 
Jul 7, 2022
553
531
1,760
With Zen 4 going up to a 230W PPT, the 250W of the 13900K won't look like any kind of difference anymore...
If you want high-end performance, you have to use up some watts; it doesn't work any other way, and Zen 4 is the biggest proof of that. No matter how efficient your design is, if you want high (compute) power you need high power.
https://www.tomshardware.com/news/a...zen-7000-power-specs-230w-peak-power-170w-tdp
Except the Zen 4 7950X will have 16 full-performance cores crunching 2 threads per core vs Intel's 8 full cores + 16 useless cores. No desktop needs 16 E-cores to handle background tasks. At most you need 4; the other 12 are there to pump up benchmark scores.
 
Except the Zen 4 7950X will have 16 full-performance cores crunching 2 threads per core vs Intel's 8 full cores + 16 useless cores. No desktop needs 16 E-cores to handle background tasks. At most you need 4; the other 12 are there to pump up benchmark scores.
No desktop needs 16 full-performance cores either... At most you need 4; the other 12 are there to pump up benchmark scores.
If you can use 16 full cores that have 32 threads, then you can also use 24 mixed cores that have 32 threads.
 
Jul 7, 2022
553
531
1,760
No desktop needs 16 full-performance cores either... At most you need 4; the other 12 are there to pump up benchmark scores.
If you can use 16 full cores that have 32 threads, then you can also use 24 mixed cores that have 32 threads.
I can name a lot of desktop applications that utilize 16 full-performance cores. I've used Alder Lake, and the P-to-E core coherency and P-to-E inter-core latency in many mission-critical workloads are problematic, to say the least.
 
I can name a lot of desktop applications that utilize 16 full-performance cores.
But only the 16 cores of the 5950X? Name two.
I've used Alder Lake, and the P-to-E core coherency and P-to-E inter-core latency in many mission-critical workloads are problematic, to say the least.
The 5950X and the 7950X will have two CCXs, so they have inter-core lag just as much. And if you look at the clocks of the 5950X, core coherency is just as bad there, with single-core clocks being about 33% faster than when all 16 cores are running.
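The ~33% figure is roughly consistent with commonly reported 5950X clocks; a quick check (the 4.9 GHz single-core boost and 3.7 GHz all-core numbers are approximate review figures, not official specs):

```python
# Rough check of the single-core vs all-core clock gap on a 5950X.
# 4.9 GHz boost and 3.7 GHz all-core are approximate review figures.
single_core_ghz = 4.9
all_core_ghz = 3.7

gap = single_core_ghz / all_core_ghz - 1
print(f"single-core clocks are {gap:.0%} higher than all-core")  # ~32%
```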
 

shady28

Distinguished
Jan 29, 2007
425
297
19,090
True. But the E-cores in the 12th-gen CPUs actually hurt performance in many cases, not just some.

<snip>


They greatly reduced the core-to-core latency where the E-cores are concerned. The ring bus on Alder Lake was crippled by those E-cores, and you couldn't clock it up because of the very low clocks on the E-cores. That appears fixed now: along with adding more cache, the E-cores are clocking 20% higher (4.7 vs 3.9 GHz). That means the ring can go faster, which means the P-cores can be fed faster.

It should provide Raptor Lake with a nice boost beyond just the frequency increases for applications that are latency sensitive.
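A quick sanity check on the quoted uplift, using the clock figures cited above:

```python
# Sanity check: E-core clock uplift from 3.9 GHz (Alder Lake) to the
# reported 4.7 GHz (Raptor Lake), matching the ~20% claim.
raptor_e_ghz = 4.7
alder_e_ghz = 3.9

uplift = raptor_e_ghz / alder_e_ghz - 1
print(f"{uplift:.1%}")  # 20.5%
```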
 
They greatly reduced the core-to-core latency where the E-cores are concerned. The ring bus on Alder Lake was crippled by those E-cores, and you couldn't clock it up because of the very low clocks on the E-cores. That appears fixed now: along with adding more cache, the E-cores are clocking 20% higher (4.7 vs 3.9 GHz). That means the ring can go faster, which means the P-cores can be fed faster.

It should provide Raptor Lake with a nice boost beyond just the frequency increases for applications that are latency sensitive.
Yup, I've read about the reported benefits. If they really did fix the P-core performance deficit (when E-cores are active), this is great news for Intel!
We'll know in about 5 weeks, when a half dozen reputable reviewers get their hands on Raptor chips. I'm still going AMD on my new build this winter, but kudos for great CPU wars (and the price drops next year)!
 
But only the 16 cores of the 5950X? Name two.

The 5950X and the 7950X will have two CCXs, so they have inter-core lag just as much. And if you look at the clocks of the 5950X, core coherency is just as bad there, with single-core clocks being about 33% faster than when all 16 cores are running.
SolidWorks and Autodesk Revit, to answer your first question. We can also throw in virtual machine hosts and SQL. There are many, many others once we get into modeling and scientific applications, but you'll have to do your own research.

Regarding the two CCXs - this is true. There is definitely an extra latency introduced with the IF.

However, the major difference between AMD's IF and Intel's E-cores is that, on the other side of the IF, you have more P-cores that are just as powerful as the first CCX's. What do you get on the other side of the latency hit introduced by Intel's E-cores? Eunuched cores (E-cores) that only act as an anchor to the P-cores much of the time. Again, we'll see if Intel corrected the issue with the Raptor chips in about 5 weeks.