News Intel Panther Lake processors could pack up to 16 cores, maximum of four performance cores according to leak

Status
Not open for further replies.

TheHerald

Respectable
BANNED
Feb 15, 2024
How about limiting both to, say, 45W and comparing? You run one at basically its 142W versus 125W, while the 13700K is 253W stock; both likely run in the more inefficient range of their cores to gain more performance at an "acceptable" power draw, so you're only doing mental gymnastics to manufacture some weird, fake-fair comparison.

Why wouldn't one compare thread counts? It's precisely about how much work an architecture's cores/threads can handle at a specific power, a.k.a. efficiency.
The most efficient chip ComputerBase has ever tested was the 13900 limited to 45W. It beat every other chip at 45W. That was, of course, before the Ryzen 5 launch.

First of all, because if one core from AMD costs $5 and one core from Intel costs $1, why would I compare the two? I'll compare 1 vs. 5 cores, since they cost the same.

But my question still stands: why does a "power hungry" chip beat a non-power-hungry chip at the same power? Doesn't that mean the second chip is even more power hungry?
 

YSCCC

Commendable
Dec 10, 2022
The most efficient chip ComputerBase has ever tested was the 13900 limited to 45W. It beat every other chip at 45W. That was, of course, before the Ryzen 5 launch.
That's because it had the most threads back in the day, and when power is limited to the point where single-core performance is so low, thread count matters. And remember, back then AMD still had the core-parking driver issues, which have since been rectified.

First of all, because if one core from AMD costs $5 and one core from Intel costs $1, why would I compare the two? I'll compare 1 vs. 5 cores, since they cost the same.
First of all, because that price is highly market-dependent. When Intel knows it has lost on the efficiency side, it needs to sell cheaper and give more cores to win in certain market segments, just as the bargains always came from AMD when AMD was the underdog. By the same logic, Chevrolet is faster than Porsche, because the similarly priced Chevrolet Spark and Porsche eBike compare that way.

But my question still stands: why does a "power hungry" chip beat a non-power-hungry chip at the same power? Doesn't that mean the second chip is even more power hungry?
Because they cram in more, many more, cores and cache for the same price. When you power-limit to the point where the cores are basically tied up, a higher core/thread count wins. Simple logic.
 

TheHerald

That's because it had the most threads back in the day, and when power is limited to the point where single-core performance is so low, thread count matters. And remember, back then AMD still had the core-parking driver issues, which have since been rectified.
So it's the most efficient. Thanks, that's what I'm saying.

First of all, because that price is highly market-dependent. When Intel knows it has lost on the efficiency side, it needs to sell cheaper and give more cores to win in certain market segments, just as the bargains always came from AMD when AMD was the underdog. By the same logic, Chevrolet is faster than Porsche, because the similarly priced Chevrolet Spark and Porsche eBike compare that way.
The only one that drops prices is AMD, though. So according to your logic, they lost on the efficiency side, so they needed to sell cheaper.


Because they cram in more, many more, cores and cache for the same price. When you power-limit to the point where the cores are basically tied up, a higher core/thread count wins. Simple logic.
Great, I agree, that's what makes Intel more efficient. That's what I've been saying all along.
 

YSCCC

So it's the most efficient. Thanks, that's what I'm saying.


The only one that drops prices is AMD, though. So according to your logic, they lost on the efficiency side, so they needed to sell cheaper.



Great, I agree, that's what makes Intel more efficient. That's what I've been saying all along.
Wrong, it isn't, because at the same thread count and core count it loses in efficiency, and if you don't lower it to a useless power draw, it loses by more. So the architecture is power hungry and inefficient.

And btw, at that power setting you can easily get a last-gen platform that costs a tiny fraction as much without drawing much more power on your electricity bill; no sane person would buy such a thing. A.k.a.: a meaningless test.
 

TheHerald

Wrong, it isn't, because at the same thread count and core count it loses in efficiency, and if you don't lower it to a useless power draw, it loses by more. So the architecture is power hungry and inefficient.
Correct, because in the same segment Intel wins in efficiency. The AMD architecture is power hungry and inefficient. If it weren't, it wouldn't be losing at the same power.

You are basically claiming that the CPU that gets less done at the same power is the more efficient one. Sounds crazy.
 

bit_user

Titan
Ambassador
There is a difference between a CPU's efficiency and how efficient a CPU is with out-of-the-box settings. I'm clearly talking about the first; you are clearly talking about the latter.
You're talking about ginning up some artificial point of comparison that makes Intel look good. No relevance to real world usage. It's just a ruse to try and distract people from the monstrosity that is the i9-14900K.
 

TheHerald

You're talking about ginning up some artificial point of comparison that makes Intel look good. No relevance to real world usage. It's just a ruse to try and distract people from the monstrosity that is the i9-14900K.
How is it artificial? The 7700X ships at 135W. We can do the comparisons there as well if you want. The 13700 will be both faster and more efficient at the same time. Obviously, being faster and more efficient makes you the more power-hungry part around here :love:
 

YSCCC

Correct, because in the same segment Intel wins in efficiency. The AMD architecture is power hungry and inefficient. If it weren't, it wouldn't be losing at the same power.

You are basically claiming that the CPU that gets less done at the same power is the more efficient one. Sounds crazy.
You're just trying to dodge and ignore plain English. Efficiency is how much you can do given a certain core and thread count, not price. To compare architectures, per-core/per-thread performance per watt is basically constant and comparable for an architecture; actual market segmentation at some random timeframe is not. By your logic I could make an ARM chip and claim it's more efficient, because I sell it at $1 and no Intel, AMD, or Apple CPU in that segment can outperform it!

If you want to compare, take one at random from GN: https://gamersnexus.net/cpus/intel-problem-cpu-efficiency-power-consumption — in the Blender test, the 14700K at 91W is way less efficient than a Threadripper 7980X. How holy-moly good is TR; RPL just loses outright.
 

YSCCC

How is it artificial? The 7700X ships at 135W. We can do the comparisons there as well if you want. The 13700 will be both faster and more efficient at the same time. Obviously, being faster and more efficient makes you the more power-hungry part around here :love:
The 13700K ships at 253W; how about testing a Threadripper at 253W?
 

TheHerald

Efficiency is how much you can do given a certain core and thread count.
No, it absolutely is not. That's absolutely wrong. It's work done per energy spent. Core count doesn't come into it.

If you want to compare, take one at random from GN: https://gamersnexus.net/cpus/intel-problem-cpu-efficiency-power-consumption — in the Blender test, the 14700K at 91W is way less efficient than a Threadripper 7980X. How holy-moly good is TR; RPL just loses outright.
And it was also the most efficient chip on that graph, bar the $4,999 Threadripper. Clearly, the 14700K is a power-hungry chip, beating EVERYTHING else in efficiency :love:

The 13700K ships at 253W; how about testing a Threadripper at 253W?
Why would anyone do that? Obviously the threadripper is more efficient. Nobody argues otherwise.
 

bit_user

When all those threads synchronize frequently, like every frame in a game, you will encounter difficulties.
Here's where I can agree that E-cores present challenges, but it's mostly because threading APIs aren't sufficiently expressive for the game to give the OS guidance on how the threads should be scheduled. Also, because a threading-oriented approach encourages games and other apps to spawn more threads than they should, which will almost certainly have some of them land on E-cores.

However, that's a problem with scheduling and not with the E-cores. Gamers have indeed found that some games run better with E-cores disabled, with hyperthreading disabled, and sometimes with both disabled.

The principle being that you have a hard time determining the size of the workload to give each thread when execution speed may vary from 50% to 100%.
The normal approach to that is just to partition the workload into small enough chunks that it's easy to effectively load-balance.
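As a minimal sketch of that approach (the chunk size, worker count, and the per-chunk work below are all illustrative, not from any real engine): many small chunks let fast cores pull more work than slow ones, instead of one big slice per thread pinning the finish time to the slowest core.

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # Stand-in for per-chunk work.
    return sum(x * x for x in chunk)

def chunked(data, size):
    # Partition the workload into many small chunks.
    for i in range(0, len(data), size):
        yield data[i:i + size]

data = list(range(10_000))

# Workers (fast P-cores and slower E-cores alike) each grab another
# chunk as they finish, so load balancing happens naturally.
with ThreadPoolExecutor(max_workers=8) as pool:
    total = sum(pool.map(process, chunked(data, 256)))
```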

In my experience, the OS trying to automagically decide things without all the relevant data always leads to subpar performance and crappy behavior.
That's why Intel created the Thread Director, but I think it was an imperfect solution and what we really need is to stop treating threads as an all-purpose building block for exploiting concurrency.

The fact is that this E-core business (and C-cores from AMD) is specialized hardware, and as such it may be useful, but in my mind it also has drawbacks that some people just don't want to understand.
C-cores are mainly just frequency-limited versions of the fullsized cores. If you only involve them when the fullsized cores' clocks get throttled to near the peaks of the C-cores, then they carry no significant deficit (other than less L3 cache).

And laptop sales being 60% higher than desktop sales plays a major role in this "optimization" coming to desktop.
By itself, the desktop market is more than big enough to justify Intel's S-series dies. There's no need for them to compromise their desktop platform, just to cater to high-end laptops.

In fact, I'm pretty sure they sell far fewer HX laptops than mainstream desktops, because most people don't want a big, heavy laptop with terrible battery life and a loud cooling fan. That's what you get when you put their desktop CPUs in a laptop.
 

YSCCC

No, it absolutely is not. That's absolutely wrong. It's work done per energy spent. Core count doesn't come into it.


And it was also the most efficient chip on that graph, bar the $4,999 Threadripper. Clearly, the 14700K is a power-hungry chip, beating EVERYTHING else in efficiency :love:


Why would anyone do that? Obviously the threadripper is more efficient. Nobody argues otherwise.
You keep saying ARCHITECTURE; do you even know what that means? It's the work per watt that can be done, determined solely by the chip architecture, which means single core vs. single core, dual core vs. dual core. If you put in more cores and sell cheaper, that's just market segmentation, not architecture.

Zen 4 has Ryzen, Threadripper, and Epyc all in the same architecture.

If you don't want to compare architectures and instead want to compare products, which are configured at the vendor's discretion, then you don't limit the power, because the power limit is part of what the vendor thinks the product should be.
 

bit_user

I have to manually limit both power hungry chips to 125w.
No, you don't have to limit anything. You can compute perf/W by dividing performance by average power utilization. It's maths.
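As a trivial illustration of that arithmetic (the scores and wattages below are made up, purely to show the division; they are not measurements of any real chips):

```python
def perf_per_watt(score, avg_power_w):
    # Efficiency is work done per unit of power: score / average watts.
    return score / avg_power_w

# Hypothetical benchmark results at each chip's stock settings.
chip_a = perf_per_watt(score=30_000, avg_power_w=250)  # 120.0 points/W
chip_b = perf_per_watt(score=24_000, avg_power_w=150)  # 160.0 points/W

# chip_a is faster outright, but chip_b does more work per watt,
# and no power limiting was needed to compare them.
```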

The 13700 is still 25% faster than the 7700x.
Just a "random" example you picked. It's always weird comparisons and nonstandard settings with you. The way you test things isn't how anyone uses them, and the product matchups tend to be equally weird.
 

YSCCC

Commendable
Dec 10, 2022
580
465
1,260
No, you don't have to limit anything. You can compute perf/W by dividing performance by average power utilization. It's maths.


Just a "random" example you picked. It's always weird comparisons and nonstandard settings with you. The way you test things isn't how anyone uses them, and the product matchups tend to be equally weird.
And he apparently doesn't know Threadripper 7000 and Ryzen 7000 are the same architecture, just different products... so his "architecture" is a specific chip in a cherry-picked market segment, at his own choice of an unrealistic power level that nobody will run those chips at.
 

bit_user

For built-in RAM: maybe because I'm old, but I'm skeptical about its longevity in the mid-term. Memory, in my experience, always partially fails and runs into errors before the CPU degrades. For instance, I had a pair of Corsair Vengeance and a pair of G.Skill DDR3 die on Sandy Bridge; not frequent, but each pair died after about five years in service, with random errors arising in a single chip.
This could be addressed, if the on-package memory exposed the ECC errors to the OS, so that it could keep a list of memory pages to remove from circulation. DDR5 already has on-die ECC, so this wouldn't have to come at any additional performance cost. ECC can correct single-bit errors, so the user likely wouldn't even notice. Just rate the memory capacity a small % lower, like they do for SSDs.
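A toy sketch of that bookkeeping, assuming the hardware could report correctable-error addresses to the OS. The class name, method name, and retirement threshold here are all invented for illustration; real OS page-offlining policies differ.

```python
from collections import Counter

RETIRE_THRESHOLD = 3  # invented; a real policy would be tunable

class PageRetirement:
    """Track correctable ECC errors per physical page and retire
    pages that error too often, shrinking usable capacity slightly,
    much like over-provisioning in an SSD."""

    def __init__(self):
        self.errors = Counter()
        self.retired = set()

    def report_correctable_error(self, page):
        if page in self.retired:
            return
        self.errors[page] += 1
        if self.errors[page] >= RETIRE_THRESHOLD:
            # The OS stops handing this page out to allocations.
            self.retired.add(page)

mm = PageRetirement()
for _ in range(RETIRE_THRESHOLD):
    mm.report_correctable_error(0x1A2B)
# Page 0x1A2B is now removed from circulation; healthy pages are not.
```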

And RAM usually degrades quicker at high temperatures. While DDR5 can sustain 85°C,
That's why I like down-draft coolers. They keep my RAM cool, also!

modern CPUs run at... 100°C TJmax, and in something like a gaming workload both Intel and AMD nowadays run at some 60-70°C and are considered cool. A RAM chip right next to the die doesn't really sound like a good idea.
Intel's Xeon Max incorporates HBM. Nvidia's Grace CPUs incorporate LPDDR5X. AMD's MI300A includes both CPU and GPU chiplets and up to 128 GB of HBM3. If there were reliability problems with on-package DRAM next to fire-breathing CPU dies, these products should exhibit poor reliability.
 

bit_user

The most efficient chip ComputerBase has ever tested was the 13900 limited to 45W. It beat every other chip at 45W.
Sure, get a big die with lots of cores and clock it low. Except it's silly, because it's still expensive and people don't usually spend so much money on a CPU, just to throw away so much performance by running it under such constraints.

So, your whole argument rests on these artificial points of comparison. Like I said, it's just a trick to distract people from the standard power consumption of these CPUs.
 

TheHerald

You keep saying ARCHITECTURE; do you even know what that means? It's the work per watt that can be done, determined solely by the chip architecture, which means single core vs. single core, dual core vs. dual core. If you put in more cores and sell cheaper, that's just market segmentation, not architecture.
First of all, any such comparison is meaningless because Intel chips have two different types of cores. But since you care about single thread vs. single thread, surely you realize that, e.g., the 13900K is much faster and consumes much less power than the 7950X, one core vs. one core. Right?

No, you don't have to limit anything. You can compute perf/W by dividing performance by average power utilization. It's maths.


Just a "random" example you picked. It's always weird comparisons and nonstandard settings with you. The way you test things isn't how anyone uses them, and the product matchups tend to be equally weird.
You don't have to limit anything, but then you aren't really comparing the CPUs but the settings. If you don't limit them to the same wattage, then Intel wins by default with the T series being at 35W. That's just silly.

Sure, get a big die with lots of cores and clock it low. Except it's silly, because it's still expensive and people don't usually spend so much money on a CPU, just to throw away so much performance by running it under such constraints.

So, your whole argument rests on these artificial points of comparison. Like I said, it's just a trick to distract people from the standard power consumption of these CPUs.
Let me ask you a question. Please, don't avoid answering it. Let's for the sake of argument agree that Intel chips are super inefficient and hopeless and they don't stand a chance and the whole 9 yards.

If Intel, for whatever reason, decided to lock the 13700K to 125W instead of the unlimited/253W it runs at now, would that mean it's more efficient than its AMD counterpart (the 7700X)? So the only thing that makes it not efficient is the power Intel decided to lock it at?
 

TheHerald

The stock PL2 of the i7-13700 is 219 W. You're trying to trick people into believing you're putting the i7-13700 at a disadvantage, when what you're actually doing is forcing it to run more efficiently than stock.

I'm sick of your mind games.
The PL2 lasts for 54 seconds. After those 54 seconds it drops to 65W. It's you who's trying to trick people.

At this point I'll say this and leave it at that. I've got a 125W cooler and $300 to buy a CPU; buying the $300 Intel chip will get a lot more performance at the same power than buying the $300 AMD chip. That's the reality of it. For the AMD chip to match the Intel chip's performance, it would need 1 kWh and LN2 canisters. But sure, the AMD chip is more efficient. You're right.
 

MacZ24

Proper
BANNED
Mar 17, 2024
The normal approach to that is just to partition the workload into small enough chunks that it's easy to effectively load-balance.

C-cores are mainly just frequency-limited versions of the fullsized cores. If you only involve them when the fullsized cores' clocks get throttled to near the peaks of the C-cores, then they carry no significant deficit (other than less L3 cache).

1/ Then you have to synchronize even more.

2/ If that were really the case, then instead of 8 full-size + 6 limited-size, you would rather choose 6 full-size + 10 limited-size, and rather than that, 4 full-size + 14 limited-size.
The fact that this is not the architecture we observe means there is a difference, and that you would prefer to use a full-size core rather than several limited-size ones.
Hence, there are drawbacks to using limited-size cores.
Hence, it is specialized hardware that is not equivalent or superior to using full-size cores.

It has uses, but you would still prefer to use a fullsize core if possible.
 

TheHerald

Funny thing: according to TPU, and using your own comparison logic, the 9950X is less efficient than the 5950X. Four years later, and AMD's latest and greatest is less efficient than a 2020 CPU. Talk about stagnation, right?
 

bit_user

The PL2 lasts for 54 seconds. After those 54 seconds it drops to 65W. It's you who's trying to trick people.
That's not accurate, though it's quite a popular misconception. Tau is indeed a time constant, with units in seconds. However, it sets the responsiveness of an exponentially weighted moving average filter, which is used to smooth out the instantaneous power draw.

PL1 is only enforced when this "average" power exceeds PL1. How long that takes depends entirely on how far above PL1 you go. If you go only a little above PL1, it could boost for quite a bit longer than 54 seconds. I can't say exactly how long, because Intel hasn't published the precise formula they use. However, I have observed this in practice, so I know my understanding of their documentation is accurate.
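A rough sketch of that behavior. Intel has not published the exact formula, so the update rule, the idle baseline, and the constants below are illustrative only (PL1/PL2-like numbers loosely echoing the values quoted in this thread):

```python
def ewma_power(samples, tau_s, dt_s=1.0, start=10.0):
    """Smooth instantaneous power samples (watts) with an
    exponentially weighted moving average of time constant tau_s,
    starting from an idle-ish baseline."""
    alpha = dt_s / (tau_s + dt_s)
    avg, history = start, []
    for p in samples:
        avg = alpha * p + (1 - alpha) * avg
        history.append(avg)
    return history

def seconds_until(history, limit):
    """First sample index at which the average exceeds the limit."""
    for t, avg in enumerate(history):
        if avg > limit:
            return t
    return None

PL1, TAU = 65.0, 56.0  # illustrative, not Intel's actual values

# Drawing far above PL1 pushes the smoothed average over the limit
# quickly, while drawing only slightly above PL1 takes far longer:
# "boost time" depends on how hard you boost, not on a fixed timer.
hard = seconds_until(ewma_power([219.0] * 600, TAU), PL1)
mild = seconds_until(ewma_power([70.0] * 600, TAU), PL1)
```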

I've got a 125W cooler and $300 to buy a CPU; buying the $300 Intel chip will get a lot more performance
I'll agree with this part. Right now, the i7-13700 or i7-13700K are compelling deals, assuming Intel has adequately mitigated the degradation issue.

However, it's still just your chosen single point of comparison, and something you like to focus on, in order to distract from the fire-breathing monsters at the top of their lineup.
 