News Intel Comet Lake-S Arrives: More Cores, Higher Boosts and Power Draw, but Better Pricing

Comet Lake has in-silicon mitigation for a handful of flaws, so there could be some significant improvements in cases where software mitigation is no longer necessary.
Eh, for the end user that was blown out of proportion anyway.

So to push to higher performance, Intel is packing in two more cores and increasing clock speeds, which means power goes up. And if you push all-core overclocks to 5.3GHz or whatever, don't be surprised to see power use jump into the 200+ watt range. Just like the old 5GHz FX-9590. AMD was obviously different in that it didn't add cores -- it just cranked up clocks and power use. SAME NET RESULT: more power, because the market wanted something new.
Yeah, but going for 10nm would not change that for Intel; they would still have to match core counts and push clocks way up, to at least 5GHz, to compete with their own older models as well as Ryzen's higher core count models.
 
Core i9-10900K: TDP is 125 W, PL2 is 250 W, Tau is 56 seconds
Core i7-10700K: TDP is 125 W, PL2 is 229 W, Tau is 56 seconds
Core i5-10600K: TDP is 125 W, PL2 is 182 W, Tau is 56 seconds
...
Sounds to me like Intel is going full-on FX-9590 with these chips and saying power be damned, we want the highest per-core performance possible for this architecture. The difference here is the 10900K has a 3.7GHz base clock vs the FX-9590's 4.7GHz base clock.
Yeah. I'm reminded of the Skylake-X launch a few years back. Core i9-7900X running 'stock' in Asus and Gigabyte boards went bonkers. I remember running Cinebench and seeing power start at 350W for the entire system -- CPU was running at 4.5GHz, the maximum turbo! It caused a cascade where the cooler couldn't dissipate the heat fast enough, so temps started at maybe 70C but then started to increase toward 100C.

As the temperature increased, the power increased, and that caused the temperature to increase even more. I watched the outlet power on a Kill-A-Watt scale from 350W up to 450W before the whole system just crashed. A BIOS update did come out and fix things a bit, but even then the all-core turbo values were aggressive -- pretty sure I was seeing 4.0GHz on all cores on some boards, and maybe higher on others.

Even today, Cascade Lake-X CPUs are going FAR above their 165W TDP in a lot of boards. I tested on an Asus Rampage VI Extreme Encore and saw 335W system power draw in H.265 encoding. If memory serves, power use was significantly higher in some other apps (like Cinebench).
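For anyone wondering what those TDP/PL2/Tau numbers actually mean in practice, here's a rough sketch of the boost-budget behavior Intel describes -- a toy model with made-up workload and idle figures, not the real firmware algorithm, and keep in mind many enthusiast boards simply raise or ignore these limits by default:

```python
# Toy model of Intel's PL1/PL2/Tau turbo behavior (NOT the actual firmware
# algorithm): the CPU may draw up to PL2 while an exponentially weighted
# moving average of package power stays under PL1 (the "TDP"); Tau sets how
# quickly that average catches up. Workload and idle numbers are made up.

def simulate(seconds, demand_w, pl1=125.0, pl2=250.0, tau=56.0, dt=1.0, idle_w=20.0):
    """Yield (time, package_power) samples for a constant demand_w workload."""
    avg = idle_w                         # moving average starts near idle
    for step in range(int(seconds / dt)):
        # Boost up to PL2 while the average is under PL1, else clamp to PL1.
        power = min(demand_w, pl2) if avg < pl1 else min(demand_w, pl1)
        avg += (power - avg) * (dt / tau)    # exponentially weighted average
        yield step * dt, power

if __name__ == "__main__":
    for t, p in simulate(180, demand_w=200.0):     # i9-10900K-style limits
        if t % 30 == 0:
            print(f"t = {t:3.0f} s   package power ~ {p:5.1f} W")
```

With those inputs the chip sustains the full 200W draw for roughly the first 50 seconds, then settles back to the 125W PL1 -- which is exactly why the "125W" figure tells you so little about real-world power draw.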
 

InvalidError

Titan
Moderator
Yeah, but going for 10nm would not change that for Intel; they would still have to match core counts and push clocks way up, to at least 5GHz, to compete with their own older models as well as Ryzen's higher core count models.
No need to push frequency "way high" to match existing chips; Intel can increase IPC too. Unless you have been living under a rock, you should know that Ice Lake (Sunny Cove) brings fairly sizable IPC bumps, and Intel is two more architectures ahead of that (Willow Cove and Golden Cove) which it hasn't been able to put into production yet due to 10nm hiccups.

If Intel could finally sort out its remaining 10nm performance issues, it could leapfrog Zen 3. However, the lack of any plans for mainstream 10nm parts in any roadmaps that I am aware of strongly implies that Intel has no intention of ramping up 10nm capacity to support launches of any such parts.

I'm not expecting anything particularly exciting for the mainstream until 2021, when Intel may launch its first generation of 7nm chips based on Golden Cove cores.
 
No need to push frequency "way high" to match existing chips; Intel can increase IPC too. Unless you have been living under a rock, you should know that Ice Lake (Sunny Cove) brings fairly sizable IPC bumps, and Intel is two more architectures ahead of that (Willow Cove and Golden Cove) which it hasn't been able to put into production yet due to 10nm hiccups.
Yes it does, in Cinebench and archivers and stuff like that. We can see from Ryzen, and from how Intel still keeps selling like hotcakes, how much people care about that. If it doesn't match the clock speeds it's going to be slower in serial stuff and games, and it will be less desirable for the normal target audience.
 
Have not progressed that much in 5 years? For a similar price to an i7-6700K, you will be able to get an i7-10700K with double the cores, double the threads, higher clocks, more cache... I mean, the benchmarks I am linking below speak for themselves. Since the i7-10700K is not yet available to benchmark, I used the i9-9900K since it has the same core/thread count. I mean, look at the multi-threaded tests! Well beyond double the performance for the same price. And that is still while being stuck on 14nm and on the same arch. If you had told me in 2015 that Intel would more than double its multithreaded CPU performance while on the same node, same architecture and same price point, there's no way I'd have believed you. Comparing these gains with BD/PD is ridiculous.
https://www.anandtech.com/bench/product/2260?vs=2263

Now let's go back to 2011 during Intel's big heyday and compare the i7-2600k with the i7-6700k (it's roughly the same amount of time). Single thread performance improved nicely, but look at the multi-threaded tests. Far less than double the performance gains. Pretty skimpy actually. So to summarize, Intel has actually gained more multi-thread performance in the last five years at the same price point while mired on the same node and same architecture than they did in 2011-2015 when they went from 32nm to 22nm to 14nm and three different uArchs (SB, Haswell, Skylake). Saying that they "have not progressed that much in 5 years" is deceptively ignoring multi-threaded performance at the same price point.
https://www.anandtech.com/bench/product/2413?vs=2260
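To put a rough number on that 6700K-to-9900K jump, here is the crudest possible estimate from core count and all-core clocks alone. The all-core turbo figures below are the commonly cited values, so treat them as approximate; the linked AnandTech results are the better reference.

```python
# Back-of-the-envelope check on the "well beyond double" claim: a crude
# multi-threaded throughput proxy of cores x all-core clock. The all-core
# turbo figures below are the commonly cited values, so treat them as rough.
chips = {
    "i7-6700K": {"cores": 4, "all_core_ghz": 4.0},
    "i9-9900K": {"cores": 8, "all_core_ghz": 4.7},
}

def mt_proxy(chip):
    """Ignores IPC, memory bandwidth, AVX, etc. -- core count times clock only."""
    return chip["cores"] * chip["all_core_ghz"]

ratio = mt_proxy(chips["i9-9900K"]) / mt_proxy(chips["i7-6700K"])
print(f"Naive multi-threaded ratio: {ratio:.2f}x")   # ~2.35x before any IPC gains
```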
My point is that they've more than doubled the core counts, but also potentially doubled the power use and die size. Basically, take the 2015 Skylake architecture and scale it up in core counts. That's pretty much all that Intel has done in five years. Tweaks and improvements to the process technology, adjust some things to allow for more cores, tweak and improve the Gen9 graphics (mostly improved video codec support in Gen9.5). It's the same architecture and process of five years ago.

So, no, I don't really think that's a major improvement in itself. Broadwell i7-6950X was a 10-core/20-thread CPU back in 2016. Stock speeds were lower, but you could overclock it to 4.2-4.3GHz without too much difficulty (with a good cooler). Yes, 10900K will invariably be quite a bit better than a 6950X, and it will cost one third the price at launch (less, actually). But it's only because of AMD and Ryzen / Threadripper that Intel is taking this approach.

Scaling up core counts in direct proportion to power and die size is the expected behavior. If a company has a 4-core chip that uses 50W at 4GHz and then releases an 8-core chip that uses 100W at 4GHz, and it's also twice the size, that's just 'simple' scaling. An impressive change would be if Intel could deliver more cores in a smaller die using less power and delivering better performance. But to do that, Intel would need 10nm or 7nm.
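In numbers, using the hypothetical figures from the paragraph above plus an invented die size purely for illustration:

```python
# "Simple" scaling in numbers: double the cores, double the power, double the
# die area -- throughput doubles, but efficiency doesn't move. All hypothetical.
def metrics(cores, ghz, watts, die_mm2):
    perf = cores * ghz                          # crude throughput proxy
    return perf / watts, perf / die_mm2         # perf-per-watt, perf-per-mm^2

old = metrics(cores=4, ghz=4.0, watts=50, die_mm2=100)   # die size invented
new = metrics(cores=8, ghz=4.0, watts=100, die_mm2=200)  # for illustration
print(old == new)   # True -- twice the performance, zero gain in efficiency
```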

Again, I reiterate: Skylake was a good architecture in 2015. And when AMD came out with an 8-core competitor to Kaby Lake in 2017, Intel did the sensible thing and went from 4-core to 6-core, then the next year to 8-core, and now two years later 10-core. It's good to get more cores, and the base architecture is still fine, but I'd argue nothing truly innovative has happened on Intel CPUs since Skylake.
 

larkspur

Distinguished
But it's only because of AMD and Ryzen / Threadripper that Intel is taking this approach.
More performance per dollar is how I judge progress in silicon, and the i7-10700K doubles threads at the same price point as its predecessor -- that is excellent. Perf per dollar is where Ryzen really shines. Starting with Coffee Lake, Intel finally started to improve that. Prior to that? Meh... such small, incremental gains. Boy, am I glad AMD finally got away from BD!
 
Seems once you get over 95% of the desktop CPU market, you basically, as a business, don't give a shirt. As long as people keep buying your product at whatever price you wanna tag them with, and from generation A to generation B you can at least add like ~10% more performance in some workloads, that seems more than enough. And it was for Intel, for some time.

Problem is, 3+ years ago AMD finally had a product that was a huge jump against its own previous architecture. In fact it was more than a jump in performance, it was a "new/old" (in the sense of the desktop market) tech paradigm: let's stop fighting Hz vs Hz if we cannot win, and focus on what we can do better (remember the first Athlon). They added more cores, free SMT (HT) and OC capabilities to all their first-gen Ryzen chips, and as the final touch of an elaborate plan, they chose to go with a new platform (this was really necessary back then) that could (and so far did) last for at least 3~4 years. On top of that they added a very decent stock cooler for all their parts, yeah all of them, even the high-end ones.

What did Intel do? Well, as Intel usually does, it launched a new platform that most likely wasn't really needed, and CPUs to try to show it cared to compete in this new "core" battle, and so began the 14nm+++ race.
At first Intel was always in the lead, gaming-wise, but when Zen+ showed up things started to become a bit different. Some reviews back in 2018 were already saying: think twice before buying a Core i5, you may want to get a Ryzen 5 instead. Problem was, only those buyers with a tight budget chose the latter. And let's face it, the Core i5 in 2018 was still the midrange king in most gaming situations. 2019 was when things really started to change, and by July, when Ryzen 3rd gen launched, it was the beginning of a new era.

Back to Intel: so what happens when you keep working on the same thing, over, and over, and over... and over again? Well, you become a master, and perfection is almost within reach. And this is what happened: they have really learned how to extract everything the 14nm node has to offer (at the cost of high power draw).
And this is my personal opinion now: they realized many, many months ago that the jump in performance from "14nm++++ perfection" to 1st-gen 10nm was not worth it at all. So they came up with the "10nm supply problem" to avoid saying "you know what, we won't go there because, let's face it, it won't really change a thing".

In fact, if you think about it, for AMD everything started when they got the PlayStation and Xbox partnerships to supply the CPUs for those consoles. Intel may have had 95% of the desktop market, but back then AMD still had (and soon will again have) 90% of the console CPU market, and I believe that was the baseline that allowed AMD to get to where it is now.

I really hope they both can keep competing, because the only winner in that is us, the consumer.
 
Throughput is IPC x clock. If the clock goes down by 10% but IPC goes up 30%, you are still 20% ahead.
Not everything is based on throughput -- hence serial workloads. Increasing IPC, as in adding more available parallel instruction slots, is going to help Cinebench and zip because they can use them, but do you see that one line where only one of the 4 instruction slots is being used? Even if this core were 6 instructions wide, for a 33% (or more like 30%) increase in Cinebench, that one single instruction would still need the same clocks to execute at the same speed, and there are programs and games where, every single cycle, the number of instructions is less than what Intel has now, so adding more will not help them.
(Which was the base idea for Bulldozer: since there are a lot of things that only use a few instructions per cycle, let's make a CPU with only a few of them per core.)
[Slide from 2012: hypothetical 4-wide pipeline showing many issue slots going unused each cycle]
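For reference, the "IPC x clock" example quoted at the top of this post actually works out to about 17% rather than 20%, though the direction of the conclusion is the same:

```python
# Throughput = IPC x clock. Quoted example: clock down 10%, IPC up 30%.
relative_throughput = (1.0 - 0.10) * (1.0 + 0.30)   # 0.9 * 1.3
print(f"{relative_throughput:.2f}x")                # 1.17x, i.e. ~17% ahead
```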
 

InvalidError

Titan
Moderator
Even if this core were 6 instructions wide, for a 33% (or more like 30%) increase in Cinebench, that one single instruction would still need the same clocks to execute at the same speed, and there are programs and games where, every single cycle, the number of instructions is less than what Intel has now, so adding more will not help them.
There are almost no real-world algorithms where you won't have at least three different concurrent instructions involved: your main operation, a loop counter or other similar condition, and a load/store/AGU op, which are all located on different subsets of execution ports -- in duplicate or even triplicate on newer chips -- to ensure they hardly ever get in each other's way, especially on modern 8-10-issue-wide CPUs. The only workloads that may struggle to achieve an average IPC of at least 3 would be densely packed, highly unpredictable branches.

That slide from 2012 is purely academic: it illustrates a hypothetical 4-wide architecture, while Sandy/Ivy Bridge, which were current at the time, were already 6-wide. Broadwell bumped that up to 8-wide, and Sunny Cove (Ice Lake) raised it again to 10, with each load/store unit getting a dedicated AGU.

The main reason why SMT rarely delivers more than 30% extra performance in heavily threaded workloads is simply that the scheduler is already able to keep 60+% of the cores' execution resources busy with a single thread most of the time. All SMT can do is attempt to fill the remaining 30-40% more efficiently.
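Put as a toy calculation (the 65% utilization and the fill rate below are just the rough figures from the paragraph above, not measurements):

```python
# Toy model of why SMT uplift tops out around 30%: if one thread already keeps
# ~65% of the execution resources busy, the sibling thread can only fill part
# of the remaining ~35%. Both figures are illustrative, not measurements.
def smt_uplift(single_thread_util, fill_fraction_of_idle):
    """Extra throughput from SMT, relative to single-thread throughput."""
    idle = 1.0 - single_thread_util
    return (idle * fill_fraction_of_idle) / single_thread_util

print(f"{smt_uplift(0.65, 0.60):.0%}")   # ~32%, in line with typical SMT gains
```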
 

nofanneeded

Respectable
No PCIe 4.0 and still 16 lanes only... I was expecting Intel to make this chip at least 40 lanes of PCIe 3.0: 16x, 16x, 8x

to make it worth it against AMD's 20 total lanes of PCIe 4.0, which count as 40 PCIe 3.0 lanes in bandwidth.

But again, Intel is not smart enough.
 

InvalidError

Titan
Moderator
No PCIe 4.0 and still 16 lanes only... I was expecting Intel to make this chip at least 40 lanes of PCIe 3.0: 16x, 16x, 8x

to make it worth it against AMD's 20 total lanes of PCIe 4.0, which count as 40 PCIe 3.0 lanes in bandwidth.
To add 24 PCIe lanes, the socket would have needed 100+ extra pins: 96 extra pins for signals (RX+/- and TX+/- for each lane), a couple of extra pins for PCIe IO power and the rest for signal-integrity grounds guarding IO pins. Add handfuls of extra pins for Vcore/gnd and you'd be looking at LGA1300+ instead.

LGA1200 does have pins for 20 PCIe lanes, but Comet Lake-S only has 16; you'll need a Rocket Lake CPU to use whatever slot the extra x4 might be routed to on a given board.
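The pin math is easy to sanity-check; the ground and power allowances below are rough assumptions for illustration, not the actual LGA1200 pin budget:

```python
# Sanity check on the socket-pin estimate above. Each PCIe lane needs a TX pair
# and an RX pair = 4 signal pins; the ground and power allowances below are
# ballpark figures, not Intel's actual LGA1200 pinout.
def extra_pins(extra_lanes, grounds_per_lane=1, power_pins=4):
    signal = extra_lanes * 4                    # RX+/RX-/TX+/TX- per lane
    grounds = extra_lanes * grounds_per_lane    # isolation grounds for the pairs
    return signal + grounds + power_pins

print(extra_pins(40 - 16))   # 24 extra lanes -> ~124 pins, i.e. "100+ extra pins"
```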
 

shady28

Distinguished
So much talk about tech specs.

I will just leave this. This is how it stands today, right now, if for some reason you feel the need to build the fastest system you can lay hands on. These are the top recorded scores in PCMark 10 (multiple general purpose use cases), FireStrike Extreme, and Time Spy Extreme.

The processors are all made by one company...

[Screenshot: top recorded PCMark 10, Fire Strike Extreme and Time Spy Extreme scores -- all Intel-based systems]
 
So much talk about tech specs.

I will just leave this. This is how it stands today, right now, if for some reason you feel the need to build the fastest system you can lay hands on. These are the top recorded scores in PCMark 10 (multiple general purpose use cases), FireStrike Extreme, and Time Spy Extreme.

The processors are all made by one company...

[Screenshot: top recorded PCMark 10, Fire Strike Extreme and Time Spy Extreme scores -- all Intel-based systems]


It's very true, and that's just fine. Everything you see in those pics is thanks to extreme OC conditions and ultra high-end gear. It only represents like ~10% of the market. Most "normal" users with similar systems (they do exist) will never touch liquid nitrogen.

Intel is the king of gaming (in most cases), and I think it's fine and good for us consumers to now have a competing product.
 

nofanneeded

Respectable
To add 24 PCIe lanes, the socket would have needed 100+ extra pins: 96 extra pins for signals (RX+/- and TX+/- for each lane), a couple of extra pins for PCIe IO power and the rest for signal-integrity grounds guarding IO pins. Add handfuls of extra pins for Vcore/gnd and you'd be looking at LGA1300+ instead.

LGA1200 does have pins for 20 PCIe lanes, but Comet Lake-S only has 16; you'll need a Rocket Lake CPU to use whatever slot the extra x4 might be routed to on a given board.

So? They already demanded a new socket for this; it is not like it is the same old socket... just a stupid strategy from Intel... they lost the war against Ryzen 3 and 4 already...

I can live without PCIe Gen4 if they give me more lanes; now they shot themselves in the foot.
 
Again, I reiterate: Skylake was a good architecture in 2015. And when AMD came out with an 8-core competitor to Kaby Lake in 2017, Intel did the sensible thing and went from 4-core to 6-core, then the next year to 8-core, and now two years later 10-core. It's good to get more cores, and the base architecture is still fine, but I'd argue nothing truly innovative has happened on Intel CPUs since Skylake.
I think the point still stands. Comet Lake is still way more competitive than Bulldozer/Piledriver ever were, despite having some similarities. How similar they are is a matter of anyone's subjective opinion.

If it doesn't match the clock speeds it's going to be slower in serial stuff and games, and it will be less desirable for the normal target audience.
They probably won't use 10nm for Comet Lake's 2021 successor. Intel's 10nm doesn't seem to be able to even achieve a 4 GHz boost on 8+ cores at the moment. Heck, the highest-clocked 10nm Intel CPU that I know of is the 1068G7, which doesn't even boost to 4 GHz on all cores. That part was just launched in February and has extremely limited availability. They're probably going to port (or die-enlarge, as opposed to die-shrink) Willow Cove/Tiger Lake to 14nm to counter this issue.
 

InvalidError

Titan
Moderator
So? They already demanded a new socket for this; it is not like it is the same old socket... just a stupid strategy from Intel... they lost the war against Ryzen 3 and 4 already...
Intel's current chips still beat Ryzen 3 in many games and applications; it is still far too early to call it a loss. Also, Rocket Lake may be launching later this year, which will bring IO pretty much on par with the best AM4 will have to offer for the remainder of its useful life.

AMD's mainstream is stuck on AM4 for another year while Intel is almost certain to introduce yet another new socket in 2021 if it launches any new desktop CPUs. If Intel feels remotely threatened by being only at parity with AM4's IOs, it can bump it up again then. Not really expecting more than perhaps another x4 to support either direct dual NVMe for RAID0/1 or NVMe + PCIe x4 slot.

I can live without PCIe Gen4 if they give me more lanes
If you want more lanes, buy HEDT. There aren't enough people in the mainstream needing a buttload of IO for AMD and Intel to justify drastically increasing the number of CPU and chipset PCIe lanes on cost-conscious platforms.
 

ubronan

Distinguished
Yes, oh yes, more cores, and again nothing which makes use of all these cores and that little bit more speed.
LoL, the reality is that you really have to spend a lot of money before you can actually make use of those so-called faster CPUs... especially on cooling solutions, and then there's the silicon lottery.
In my country you cannot get any good ones, because all the shops bin them and sell the good ones to people who pay a lot more for higher-clocking chips. There actually are real shops that offer them at the exact speed you want them to achieve.
But the fact is that here in my country you probably won't find any good ones at all. I have bought several trays of 6700Ks and tested them all... none ran more than 200 to 300MHz higher. So no fabled high-clocking beast at all. Yes, from different shops, and I ended up with a huge amount of standard-clocking CPUs :(
 

shady28

Distinguished
It's very true, and that's just fine. Everything you see in those pics is thanks to extreme OC conditions and ultra high-end gear. It only represents like ~10% of the market. Most "normal" users with similar systems (they do exist) will never touch liquid nitrogen.

Intel is the king of gaming (in most cases), and I think it's fine and good for us consumers to now have a competing product.


Yes, but you're implying that there aren't a bunch of OC'd, ultra high-end 3900X and 3950X type systems. There are. Lots of them.

They just never show up in the top 50 or so, keeping in mind that there are many thousands of tests for these benchmarks. This implies something. The amount of core OC drops off pretty quickly after the first 3 or 4 massively overclocked (6GHz) Intel parts, and still Ryzen is a no-show.

My personal theory is that this comes down to drivers and chipsets. I've owned a few AMD systems in the past, from about 2003 to 2010 (several Athlons, ending with the FX series), before switching to Intel. I also recently did some hard looking at the 5600XT / 5700 / 5700XT GPUs. I see the same thing with AMD I saw 10-20 years ago. Fast in a benchmark, not very reliable, requires fiddling to get it right.
 

MasterMadBones

Distinguished
Yes, but you're implying that there aren't a bunch of OC'd, ultra high-end 3900X and 3950X type systems. There are. Lots of them.

They just never show up in the top 50 or so, keeping in mind that there are many thousands of tests for these benchmarks. This implies something. The amount of core OC drops off pretty quickly after the first 3 or 4 massively overclocked (6GHz) Intel parts, and still Ryzen is a no-show.

My personal theory is that this comes down to drivers and chipsets. I've owned a few AMD systems in the past, from about 2003 to 2010 (several Athlons, ending with the FX series), before switching to Intel. I also recently did some hard looking at the 5600XT / 5700 / 5700XT GPUs. I see the same thing with AMD I saw 10-20 years ago. Fast in a benchmark, not very reliable, requires fiddling to get it right.
First of all, I think that 10% is a huge overestimate. It's probably closer to 0.1% or less.

Second, the reason those Intel CPUs are up there is because the architecture itself is way more suitable for high clock speeds than AMD's. This has to do with some of the decisions that AMD has made when designing Zen: IPC on par with Intel's in the smallest possible core with strong multithreaded scaling. Intel has a slightly longer pipeline, which allows for higher clocks, but makes the core larger, especially if you want to keep the same IPC. That wasn't really a concern for Intel until recently, when core counts started to increase dramatically.

On top of that, PCMark and 3DMark are latency-sensitive, because the former uses MS Office including Excel, which is very latency constrained, for benchmarking, and the latter attempts to replicate gaming. AMD has simply not done as well as Intel in these particular benchmarks historically because the memory architecture is different.
 
Yes, but you're implying that there aren't a bunch of OC'd, ultra high-end 3900X and 3950X type systems. There are. Lots of them.

They just never show up in the top 50 or so, keeping in mind that there are many thousands of tests for these benchmarks. This implies something. The amount of core OC drops off pretty quickly after the first 3 or 4 massively overclocked (6GHz) Intel parts, and still Ryzen is a no-show.

My personal theory is that this comes down to drivers and chipsets. I've owned a few AMD systems in the past, from about 2003 to 2010 (several Athlons, ending with the FX series), before switching to Intel. I also recently did some hard looking at the 5600XT / 5700 / 5700XT GPUs. I see the same thing with AMD I saw 10-20 years ago. Fast in a benchmark, not very reliable, requires fiddling to get it right.

lol, I'm not implying anything. There are lots of people who like to OC their AMD CPUs and actually do it. I have no idea where you got that I said AMD CPUs can't be OC'd. I've had mine OC'd for a few days and then went back to stock because it really doesn't make any sense.

On the other hand, your post is actually the one implying that Intel is the way to go, don't you think?

"...This is how it stands today, right now, if for some reason you feel the need to build the fastest system you can lay hands on. These are the top recorded scores in PCMark 10 (multiple general purpose use cases), FireStrike Extreme, and Time Spy Extreme.

The processors are all made by one company... "

You are implying that since all these Intel CPUs are in the top ten of three synthetic benchmarks, if you build a system with them it will be the best thing you can do, and it will be the best choice for all working scenarios. And that's a whole lot of nonsense.

I will leave you a few reviews of one of the CPUs in your "top ten charts"; I wonder what an informed professional would pick after seeing this:

https://www.tomshardware.com/reviews/intel-core-i9-10980xe/5


https://www.youtube.com/watch?v=vuaiqcjf0bs

https://www.youtube.com/watch?v=HxVj_cbnxco

https://www.youtube.com/watch?v=H0vLYcPa3uk



PS: If last year, when I upgraded my old i5-3570 rig, Intel had had a Core i5 CPU with HT available, I would have bought that one instead of my Ryzen 5 3600.
Then again, after 9 months of using it I'm very, very happy with my purchase. I'm not a fan of any brand; I just simply pick the one that has the better value.

Cheers
 

Deleted member 1560910

Guest
I think this needed to happen like 2 months ago. I think this gen Intel will own Ryzen, but what next-gen AMD will look like is the real question. Who knows, maybe Intel got it right and AMD will go back to the cave they crawled out of.
 
I think this needed to happen like 2 months ago. I think this gen Intel will own Ryzen, but what next-gen AMD will look like is the real question. Who knows, maybe Intel got it right and AMD will go back to the cave they crawled out of.

Maybe on the next node Intel could bring back the big artillery, but right now a new iteration of the 14nm node won't send AMD back anywhere.

Also, why would anyone want AMD to go back to its cave?

The best thing that can happen to us, the buyers, is having a choice, and if you pay attention, thanks to AMD, Intel finally reacted: it is trying to move on, adding some features to all of its desktop segment, and it dropped its stupidly high prices for the first time after years of abusing its dominant, yet rightful, place as the CPU king.
Competition is good, no matter if you are an AMD, Intel, Ford or Chevrolet fan.
 

Shadowclash10

Prominent
It should be obvious that every player in the CPU industry HAS to release something new. It's how they compete. If we only compare the i7-9700K with the i7-10700K, we find that the Comet Lake chip has double the threads, higher base and boost clocks, 4MB more L3 cache, and the exact same MSRP as the Coffee Lake refresh. All of that adds up to significantly higher performance, especially multi-thread performance. Sure, the power draw is very high! But considering we the consumers are still getting big performance differences out of the same pricing tier while Intel is still using 14nm... that's very impressive. Yay competition!



Anyway, back to the comparison with BD... During the BD/PD FX series, at no time did AMD add additional modules or threads. Going from BD to PD, spanning 2011 until Ryzen's launch in 2017, saw very small performance gains. Going from original Skylake in 2015 to Comet Lake in 2020 is like night and day! If we only looked at Skylake vs Kaby Lake then yeah, I could see a comparison to AMD's FX-9590: bump up the clocks, higher performance with higher power draw. But Intel has been adding cores, HT, cache and clocks. It's a ridiculous comparison.



Yes, everyone has to release something new. However, the reason everyone is jumping on Intel (rightfully) is that Intel should not still be on 14nm. Comet Lake does have improvements over 9th-gen, but it is not enough. Just imagine: if Intel had already moved to 10nm or 7nm, they would have way more massive performance improvements. AMD has already moved (to 7nm), and Intel has been on Skylake for ~5 years.



Well, of course. There must be some improvements over 5 years. But the problem is that Intel isn't making the most of what they can do. Like I said above, they should have moved to a better process already. IMO, AMD has made way bigger performance leaps than Intel since ~2015.

On a different note, what do you think would happen if Intel had moved to 7nm already? Would they have just blown AMD out of the water?
 
There are almost no real-world algorithms where you won't have at least three different concurrent instructions involved:

That slide from 2012 is purely academic: it illustrates a hypothetical 4-wide architecture, while Sandy/Ivy Bridge, which were current at the time, were already 6-wide. Broadwell bumped that up to 8-wide, and Sunny Cove (Ice Lake) raised it again to 10, with each load/store unit getting a dedicated AGU.
So you do understand the issue: while most things an average person will run on a PC are 3 instructions wide, or maybe even 4 or 5, Skylake is already at 6, so further widening the core will not give them any additional speed.

The main reason why SMT rarely delivers more than 30% extra performance in heavily threaded workloads is simply that the scheduler is already able to keep 60+% of the cores' execution resources busy with a single thread most of the time. All SMT can do is attempt to fill the remaining 30-40% more efficiently.
Again, it's a matter of how "wide" the software is in comparison to the core; there is plenty of software out there that does give you 100% scaling with SMT/HTT.