News: Intel's next-gen desktop CPUs have leaked — Arrow Lake Core Ultra 200 series share similar core counts with Raptor Lake Refresh

Obviously, choosing only the models that favor Intel is not a good comparison.
Look at the graph that you posted and what apps they tested... it only has 5 apps, and 100% of them favor AMD, but that is a good comparison for you... because it favors AMD and not Intel?!

Here is the TechPowerUp result. They test 47 or so apps, and both the 13900K and the 14900K beat the 7950X, even at stock settings. It's not by much, but still.
[Image: TechPowerUp relative CPU performance chart]
 

NinoPino
Look at the graph that you posted and what apps they tested... it only has 5 apps, and 100% of them favor AMD, but that is a good comparison for you... because it favors AMD and not Intel?!

Here is the TechPowerUp result. They test 47 or so apps, and both the 13900K and the 14900K beat the 7950X, even at stock settings. It's not by much, but still.
[Image: TechPowerUp relative CPU performance chart]
The argument was HT performance, so I obviously took the multithreading graph from Tom's.
As you can read above, I observed that when discussing whether HT is worth having from a technical point of view, it makes more sense to compare hypothetical future performance against the competitor rather than against the 14900K, which IMHO is a failure from a multithreaded-performance point of view.
 
NinoPino said:
The argument was HT performance, so I obviously took the multithreading graph from Tom's.
As you can read above, I observed that when discussing whether HT is worth having from a technical point of view, it makes more sense to compare hypothetical future performance against the competitor rather than against the 14900K, which IMHO is a failure from a multithreaded-performance point of view.
But even in Tom's chart the 7950X is only 2% above the 14900K, so if the Intel chip is a failure in multithreading, then the Ryzen chip is still 98% a failure...
A 2% difference is not a difference; the 7950X and the 14900K are equal in multithreading, but the 7950X needs 16 full cores to do it.
 

NinoPino
But even in Tom's chart the 7950X is only 2% above the 14900K, so if the Intel chip is a failure in multithreading, then the Ryzen chip is still 98% a failure...
A 2% difference is not a difference; the 7950X and the 14900K are equal in multithreading, but the 7950X needs 16 full cores to do it.
A failure mainly for the power usage, and in part also for the stability problems.
 
They're killing hyperthreading because it's a free 15% IPC improvement: thanks to all the Spectre and other exploit fixes over the years, killing off hyperthreading allows them to drop the performance-killing exploit protections.
Hyper-Threading/simultaneous multithreading has NEVER been free. Hyperthreading uses more power and produces more heat. This causes cores with HT enabled to throttle due to hitting their TDP/PPT limits sooner and more frequently.

It looks like Intel has decided that the extra power and thermal headroom can be better spent clocking non-HT cores higher and for longer.
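If you're on Linux, you can actually see both halves of this trade-off on your own machine: whether SMT is on, and which speculative-execution mitigations the kernel is applying. A minimal Python sketch, assuming a reasonably recent kernel that exposes the standard sysfs entries:

```python
# Minimal sketch, Linux-only: report SMT status and the kernel's
# speculative-execution mitigation summaries. Both are standard sysfs
# entries on recent kernels; older kernels may not have them.
from pathlib import Path

smt = Path("/sys/devices/system/cpu/smt/active")
print("SMT active:", smt.read_text().strip() if smt.exists() else "unknown")

vulns = Path("/sys/devices/system/cpu/vulnerabilities")
if vulns.is_dir():
    for entry in sorted(vulns.iterdir()):
        # Each file holds a one-line status, e.g. "Mitigation: ..."
        print(f"{entry.name}: {entry.read_text().strip()}")
```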
 
Hyper-Threading/simultaneous multithreading has NEVER been free. Hyperthreading uses more power and produces more heat. This causes cores with HT enabled to throttle due to hitting their TDP/PPT limits sooner and more frequently.
The TDP limit is set with HTT in mind; you don't hit that limit unless both the normal cores and the SMT threads are fully loaded (unless you are overclocking, but Intel and AMD don't sell CPUs as overclocked), so you are talking about twice the load on one side compared to the other...
If you are running something that loads only all the normal cores, then the core speed will be the same, because the TDP has all of that headroom that HTT is supposed to use.
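To put some toy numbers on what I mean (these are made-up figures purely to show the logic, not measurements from any real chip):

```python
# Toy model of the headroom argument -- all numbers are invented.
PACKAGE_LIMIT_W = 200   # hypothetical package power limit (PL1)
CORES = 8               # hypothetical physical core count
WATTS_1T = 18           # per-core power with one thread loaded
WATTS_2T = 24           # per-core power with both SMT threads loaded

both_threads = CORES * WATTS_2T   # 192 W: the load the limit is sized for
one_thread = CORES * WATTS_1T     # 144 W: physical cores only

print(f"All SMT threads loaded: {both_threads} W of {PACKAGE_LIMIT_W} W")
print(f"Physical cores only:    {one_thread} W "
      f"(headroom: {PACKAGE_LIMIT_W - one_thread} W)")
```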
 

TheHerald
NinoPino said:
The argument was HT performance, so I obviously took the multithreading graph from Tom's.
As you can read above, I observed that when discussing whether HT is worth having from a technical point of view, it makes more sense to compare hypothetical future performance against the competitor rather than against the 14900K, which IMHO is a failure from a multithreaded-performance point of view.
But there is no competition in the MT department.

The 8600X will launch at $299 and will be losing to a 13600K. Intel's new CPU doesn't need to compete with that part, since their 13th-gen (and probably 12th-gen) i5s beat it already.

The 8700X will launch at $399 and will be losing to the 13700K. Again, read the above.

The 8900X will launch at $549 and will be losing to the 13900K. Again, see above.

Even by turning off HT on 13th-gen parts, they will still be ahead of Zen 5 in MT performance.
 
The TDP limit is set with HTT in mind; you don't hit that limit unless both the normal cores and the SMT threads are fully loaded (unless you are overclocking, but Intel and AMD don't sell CPUs as overclocked), so you are talking about twice the load on one side compared to the other...
Definitely not true.
You can easily hit thermal limits on a single core (or a couple of cores) without all cores being fully loaded. It is also NOT twice the load. These shadow cores are not even close in IPC to the full, real cores. In fact, they are not even real cores; they are just cache fillers for the next bit of work needing to be done.

If you are running something that loads only all the normal cores, then the core speed will be the same, because the TDP has all of that headroom that HTT is supposed to use.
So, once you bring different types of workloads into the equation, the results get muddied. There are definitely many scenarios where having HT/SMT enabled is beneficial. As you said, it can be up to a 15-20% performance benefit in certain scenarios.

The points I wanted to make are:
1) HT/SMT has NEVER been just free performance. There has always been a thermal, power, and (now) security cost involved.
2) It seems that Intel has crunched the numbers, done the math, and decided that the overhead involved with HT is no longer worth it. If these new chips show a major increase in IPC, then being able to clock them higher and for longer (vs. having HT enabled) may completely negate any HT benefit.

We'll see when the next gen chips launch. ;)
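In the meantime, if anyone wants a rough feel for SMT scaling on their own machine, here's a quick Python sketch. Multiprocessing sidesteps the GIL, and the worker counts below are placeholders you'd set to your actual physical/logical core counts. Treat the output as a ballpark, not a verdict: it swings with the workload, cooling, and power limits, and hybrid P+E core chips muddy the physical-vs-logical comparison further.

```python
# Rough SMT scaling check: run the same CPU-bound work on N workers
# (physical cores) vs 2N workers (logical threads) and compare wall time.
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    # CPU-bound integer loop so every worker keeps a core busy.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers: int, jobs: int = 32, size: int = 5_000_000) -> float:
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [size] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    PHYSICAL = 8    # placeholder: set to your physical core count
    LOGICAL = 16    # placeholder: set to your logical (SMT) thread count
    t_phys = timed_run(PHYSICAL)
    t_smt = timed_run(LOGICAL)
    print(f"{PHYSICAL} workers: {t_phys:.2f}s | {LOGICAL} workers: {t_smt:.2f}s")
    print(f"SMT speedup: {t_phys / t_smt:.2f}x")
```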
 

NinoPino
Yeah, both the power usage and the stability problems come from mobo makers overshooting the safety limits and reviewers leaning into it, knowingly using bad settings in their reviews.
The power usage of the CPU comes from the mobo? Not from the process node and architecture?
Every day I discover something new.
 

TheHerald
NinoPino said:
The power usage of the CPU comes from the mobo? Not from the process node and architecture?
Every day I discover something new.
Yes, power usage comes 100% from the mobo. The process and the architecture change nothing. If the mobo decides to give 10 watts to the CPU, that's how much the CPU will consume. If the mobo decides to give 200 watts, that's how much the CPU will consume, etc.

If power usage depended on the process node and architecture, then you couldn't have 2 CPUs built on the same node with the same architecture consuming completely different amounts of power. The 14900K can consume up to 350 watts; the 14900T consumes 35. How is that possible?
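And you don't have to take my word for the limit being a platform setting rather than something baked into the die: on Linux you can read what's actually been programmed through the powercap interface. A minimal sketch (Intel-only; assumes the intel_rapl driver is loaded and the sysfs files are readable):

```python
# Minimal sketch: print the package power limits the platform/firmware
# has programmed via the Linux powercap (intel_rapl) interface.
from pathlib import Path

rapl = Path("/sys/class/powercap/intel-rapl:0")  # package-0 domain
if rapl.is_dir():
    print("domain:", (rapl / "name").read_text().strip())
    for c in (0, 1):  # constraint 0 = long term (PL1), 1 = short term (PL2)
        limit = rapl / f"constraint_{c}_power_limit_uw"
        if limit.exists():
            name = (rapl / f"constraint_{c}_name").read_text().strip()
            watts = int(limit.read_text()) / 1_000_000  # microwatts -> watts
            print(f"{name} limit: {watts:.0f} W")
else:
    print("intel-rapl powercap interface not available")
```

Same silicon, different programmed limits: that's the whole difference between a 35 W "T" part and a 250 W+ "K" part under load.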
 

NinoPino
TheHerald said:
Yes, power usage comes 100% from the mobo. The process and the architecture change nothing. If the mobo decides to give 10 watts to the CPU, that's how much the CPU will consume. If the mobo decides to give 200 watts, that's how much the CPU will consume, etc.

If power usage depended on the process node and architecture, then you couldn't have 2 CPUs built on the same node with the same architecture consuming completely different amounts of power. The 14900K can consume up to 350 watts; the 14900T consumes 35. How is that possible?
You've left me speechless.
 

TheHerald
Yeah, both the power usage and the stability problems come from mobo makers overshooting the safety limits and reviewers leaning into it, knowingly using bad settings in their reviews.
Well, the Intel/Nvidia hatred is so strong on the internet that you can't reason with fanatics. Intel has - by a large margin - the most efficient CPUs in almost every segment in both ST and MT performance, but all you'll hear about is "but out of the box".

A lot of reviewers, by their own admission, have to pander to this nonsense, because videos that don't hate on Intel don't do as well in YouTube clicks. Many content creators have mentioned this. Reviewers that keep shi*** on Intel about how terrible it is are running 12900Ks and other Intel CPUs on their home computers (Steve, cough cough).
 

TheHerald
NinoPino said:
You've left me speechless.
I'm sure you can come up with a few words to explain - if power draw is determined by process node and architecture - how come 2 CPUs with an identical process node and architecture consume vastly different amounts of power. Leave the Intel hatred aside for 10 seconds - if you are capable of such a feat - and provide some evidence for your extraordinary claims. Please?