
Intel 10-Core Comet Lake-S CPU Could Suck Up To 300W
(edit: used meme rather than quote, because it's so fitting)



unless pyromania is your thing. :)

Endorsed by Def Leppard.

[Image: Def Leppard's Pyromania album cover]
 
"It seems strange that Intel would not bring 10 cores to the mobile segment. "

Considering the article is based on a source claiming over 300W is needed for the desktop version ... then no, this isn't strange at all ... unless pyromania is your thing. :)
At 20 threads at 4.9GHz (if they are talking about default), running probably something like heavy AVX workloads.
More cores is always going to be a trade-off for clocks in laptops. They can release one and just have it run at lowish clocks; as long as the clocks aren't too low and the additional cores can still produce higher numbers, it's gonna be fine.
 
At 20 threads at 4.9GHz (if they are talking about default), running probably something like heavy AVX workloads.
More cores is always going to be a trade-off for clocks in laptops. They can release one and just have it run at lowish clocks; as long as the clocks aren't too low and the additional cores can still produce higher numbers, it's gonna be fine.

Yeah, that's the point ... a 10-core at 2GHz will be slow at almost everything. An 8-core at 3.0GHz would be better for a laptop, or even 6 cores at 3.5GHz. Intel is clearly finding the limits of 14nm.
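For a rough sense of that trade-off, here's a minimal sketch treating cores × clock as a naive throughput proxy (the three configurations are the hypothetical ones above; real results depend on IPC, turbo behavior, and how well the workload scales):

```python
# Crude throughput proxy: cores x clock, ignoring IPC, turbo, and memory limits.
configs = [
    ("10 cores @ 2.0 GHz", 10, 2.0),
    ("8 cores @ 3.0 GHz", 8, 3.0),
    ("6 cores @ 3.5 GHz", 6, 3.5),
]
for name, cores, ghz in configs:
    # Aggregate GHz approximates multi-threaded throughput; per-core GHz
    # caps how fast any single thread can run.
    print(f"{name}: aggregate {cores * ghz:.1f} GHz, per-core {ghz} GHz")
# 10 cores @ 2.0 -> aggregate 20.0 GHz, but every thread crawls at 2.0 GHz
# 8 cores  @ 3.0 -> aggregate 24.0 GHz: more total AND 50% faster per thread
# 6 cores  @ 3.5 -> aggregate 21.0 GHz, with the fastest individual threads
```

By this crude measure, the hypothetical 8-core at 3.0GHz wins on both total throughput and single-thread speed, which is the point being made.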
 
What a weird time to upgrade, when stuff like DDR5, PCI Express 5, 40Gb USB, and multi-gigabit Ethernet is right on the horizon...
I wouldn't worry about DDR5. Maybe memory starts to become a bottleneck above 8 cores, but I think DDR4 is basically fast enough.

Er... PCIe 5 won't be coming to desktops anytime soon (i.e., not in the next 5 years). Mark my words. You'll have to "settle" for PCIe 4.

I'm not sure how practical 40 Gbps USB 4 will be, given that it's basically Thunderbolt 3, which requires an active cable for full speed over distances > 0.5 meters. But if you can get a board that has it, why not?

And multi-gigabit Ethernet has already been here for quite a while. In Dec. 2018, I bought a pair of Aquantia 10 GigE cards for $75 each. Cheap switches are what we're still waiting for. And yeah, I also want to see 2.5 Gbps as a standard feature of upper-end mainstream motherboards, but you can already find it (and faster) in some.
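To put the "DDR4 is basically fast enough" point in rough numbers, here's a minimal sketch of theoretical dual-channel bandwidth split per core; DDR4-3200 and the 8/10-core counts are just assumed example inputs:

```python
# Theoretical peak bandwidth of dual-channel DDR4, split evenly per core.
mt_per_s = 3200          # mega-transfers/s (assumed DDR4-3200)
bytes_per_transfer = 8   # each channel is 64 bits wide
channels = 2             # mainstream desktop platforms are dual-channel

peak_gb_s = mt_per_s * bytes_per_transfer * channels / 1000
print(f"Peak: {peak_gb_s} GB/s")                  # 51.2 GB/s total

for cores in (8, 10):
    print(f"{cores} cores -> {peak_gb_s / cores:.1f} GB/s per core")
# 8 cores -> 6.4 GB/s per core; 10 cores -> 5.1 GB/s per core
```

Whether ~5-6 GB/s per core is "enough" obviously depends on the workload, but it shows why the concern only really starts above 8 cores.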
 
A lot of that stuff, like faster USB and networking hardware, can be added later with PCIe add-in cards too.

I suppose I can see the DDR5 argument though. It's unlikely to make much of a performance difference in the near-term, but if you decide to upgrade your CPU a few years down the line to one from a newer generation, you'll not only need a new motherboard to make it happen, but also new RAM, which could increase the cost of the upgrade. Of course, unless CPUs suddenly become dramatically faster, I suspect the current generation of processors should hold their own for a while.

Another thing to mention would be raytracing hardware for graphics cards. Currently only Nvidia's 20-series cards have dedicated RT acceleration, but even in those, the performance is rather lackluster, and may only get worse in the future if games target cards with faster RT performance a couple years down the line. I guess a lot of that may depend on how well the upcoming consoles will handle those effects though. Still, I could certainly see a scenario where games start focusing on raytraced lighting effects, and while they will undoubtedly continue to support traditional effects to fall back on, developers may not put as much effort into making those look good.
 
I'm not surprised by this at all.

Intel's i9-9900KS is really pushing what is feasible with their current 14nm node.

The i9-9900KS draws around 250W in some cases with all 8 cores running at its 5GHz turbo. It borderline requires coolers costing over $100 in order to see acceptable temps and maximum performance.

If Intel were to add 2 more cores and have all 20 threads running at a similar ~5GHz all-core boost, it doesn't take a genius to see the problem.

Until Intel gets a new node out, they will struggle to gain any more performance.

Meanwhile, AMD is on 7nm Zen 2 and has 16 cores at low-to-mid 4GHz drawing a good bit less power than Intel's 8-core 9900KS. Plus, AMD is going to have 7nm+ Zen 3 out by the end of the year, according to Lisa Su.
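For what it's worth, the rumored ~300W figure is consistent with simple scaling from those 9900KS numbers. A back-of-the-envelope sketch using the standard dynamic-power relation P ∝ cores × V² × f, where the ~1.3 V all-core voltage is an assumption and leakage is ignored entirely:

```python
# Back-of-the-envelope dynamic power: P ~ k * cores * V^2 * f.
def dynamic_power(cores, volts, ghz, k):
    """Very rough dynamic-power estimate in watts."""
    return k * cores * volts ** 2 * ghz

# Fit k to one data point: ~250W for the 8-core 9900KS at its 5GHz all-core
# turbo (assuming ~1.3 V under that load -- an assumption, not a measurement).
k = 250 / (8 * 1.3 ** 2 * 5.0)

# Scale to 10 cores at a similar ~4.9GHz all-core boost.
print(f"{dynamic_power(10, 1.3, 4.9, k):.0f} W")   # ~306 W, in line with the rumor
```

And since voltage typically has to rise with frequency, the real number could be even worse.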
 
I wouldn't worry about DDR5. Maybe memory starts to become a bottleneck above 8 cores, but I think DDR4 is basically fast enough.

Er... PCIe 5 won't be coming to desktops anytime soon (i.e., not in the next 5 years). Mark my words. You'll have to "settle" for PCIe 4.

I'm not sure how practical 40 Gbps USB 4 will be, given that it's basically Thunderbolt 3, which requires an active cable for full speed over distances > 0.5 meters. But if you can get a board that has it, why not?

And multi-gigabit Ethernet has already been here for quite a while. In Dec. 2018, I bought a pair of Aquantia 10 GigE cards for $75 each. Cheap switches are what we're still waiting for. And yeah, I also want to see 2.5 Gbps as a standard feature of upper-end mainstream motherboards, but you can already find it (and faster) in some.
PCIe 5 will more than likely come with AM5 in 2022, since the spec is already finished.
 
Meanwhile, AMD is on 7nm Zen 2 and has 16 cores at low-to-mid 4GHz drawing a good bit less power than Intel's 8-core 9900KS. Plus, AMD is going to have 7nm+ Zen 3 out by the end of the year, according to Lisa Su.
When all cores are under load, they too have to run at base clocks; it's 3.5GHz for the 16-core one.
They only hit mid-4GHz if only a single core is being stressed.
Physics works for everybody.
 
You wanna bet on it?

PCIe 5 is going to be one of those "server-only" features, for the foreseeable future. Maybe it'll get used inside CPU packages, but you're not going to have PCIe 5.0 motherboard slots or consumer GPUs.

Why bet on it? You seem so sure about it; where is your proof? Or is it just guesswork?

In every thread you insist on it. Fine, show us proof: show us statements from AMD or Intel, and not your guessing.
 
Current hardware doesn't saturate the 3.0 bus in most cases. There is almost no reason or need for the PCIe 5.0 specification to find its way onto consumer hardware for at least that long. I'd suspect consumer boards will ride PCIe 4.0 for at least four years UNLESS Intel decides to leapfrog for a paper victory. It wouldn't really make sense though, since no consumer hardware can appreciate it anyhow.
 
Current hardware doesn't saturate the 3.0 bus in most cases. There is almost no reason or need for the PCIe 5.0 specification to find its way onto consumer hardware for at least that long. I'd suspect consumer boards will ride PCIe 4.0 for at least four years UNLESS Intel decides to leapfrog for a paper victory. It wouldn't really make sense though, since no consumer hardware can appreciate it anyhow.

That's because you are focusing only on 16-lane PCIe 3.0 ...

16-lane slots must become history and be canceled.

Once we make 8-lane cards the maximum standard, the GPU will use only 8 lanes of PCIe 5, and the remaining 8 lanes will be free for other cards ...

Moreover, you can put a 10Gbit card on only 1 lane of PCIe 5.0. Today you need 4 or 8 lanes for that on 3.0.

1-lane slots today are useless on PCIe 3.0, but on PCIe 5.0? Almost all hardware could be put on them, and for cheap. Most motherboards don't have more than one 4-lane slot, and the rest are 1-lane only.

True, 16 lanes are not saturated, BUT 1 and 4? They are saturated today. Stop thinking only of 16-lane GPU cards.
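The lane arithmetic roughly checks out. A minimal sketch using approximate post-encoding per-lane rates (the GB/s figures are the usual published approximations; NIC protocol overhead is ignored):

```python
import math

# Approximate usable bandwidth per PCIe lane in GB/s (post-encoding).
LANE_GB_S = {"3.0": 0.985, "4.0": 1.969, "5.0": 3.938}

def lanes_needed(nic_gbps, gen):
    """Minimum lanes to carry a NIC at full line rate."""
    need_gb_s = nic_gbps / 8                      # bits -> bytes
    return math.ceil(need_gb_s / LANE_GB_S[gen])

for gen in ("3.0", "4.0", "5.0"):
    print(f"10GbE on PCIe {gen}: x{lanes_needed(10, gen)}")
# 10GbE on PCIe 3.0: x2 (in practice an x4 card, since x2 slots are rare)
# 10GbE on PCIe 4.0: x1
# 10GbE on PCIe 5.0: x1
```

Strictly speaking a 10GbE card needs only two 3.0 lanes, not 4 or 8, but since x2 slots barely exist the broader point stands: newer generations push full 10GbE into a single lane.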
 
What a weird time to upgrade, when stuff like DDR5, PCI Express 5, 40Gb USB, and multi-gigabit Ethernet is right on the horizon...


DDR5 is likely at least 2 years away. PCIe 5 is probably further out than that, though possibly arriving at the same time. Multi-gigabit Ethernet... yeah, that could happen sooner; the soonest would be approximately 1 year.

The first-gen DDR5 is likely to be slower for the same money. Historically, new memory has always been more expensive for the same performance tier.

Right now is a pretty good time to upgrade. The next best time would be the end of this year when Zen 3 comes out. Then you have to wait another year-ish for Zen 4, and maybe finally something new from Intel in the 2-year time frame. DDR5 will likely first come out with Zen 4 in ~2022 and/or whatever Intel comes out with in ~2022.
 
I think it's reasonably impressive just how far Intel has managed to push their 14nm node, but they're clearly at the limits of what the node is capable of, and of what it was originally intended to do. I personally don't find the new 10-core all that appealing; I think the 10700K 8-core with Hyper-Threading and the new 6C/12T i5 look far more appealing to the average buyer. Not only because the cost will be lower, but because they land squarely in the sweet spot of what most buyers are looking for: reasonably efficient, performant products at reasonable prices (at least one would hope, but who knows with it being Intel).

I think anyone who is looking for the multi-threaded performance of the 10-core would do better to look at the Ryzen 9 SKUs, both the 3900X and the 3950X, especially considering they're likely to come down a bit in price once the new Comet Lake processors are released. And with the advent of Zen 3 on the horizon, Intel really needs to transition to a new node and architecture, 10nm or otherwise, because 14nm Skylake is clearly washed up at this point. It's still reasonably good, but they've been on the same arch since 2015. Zen 2 is already beating Coffee Lake-R in a lot of tasks, and Zen 3 will only extend that lead. Intel's current predicament reminds me a lot of AMD during the Bulldozer/Piledriver days: releasing the same <Mod Edit> at higher clock speeds in the hope of remaining relevant. Obviously it isn't a true one-to-one comparison, since Intel still maintains a lead in some tasks, but you get the idea.
 
When all cores are under load, they too have to run at base clocks; it's 3.5GHz for the 16-core one.
They only hit mid-4GHz if only a single core is being stressed.
Physics works for everybody.
That's not quite accurate. From the reviews I've seen, the 3950X manages an all-core boost of around 4.0 GHz at stock, and a bit higher still with PBO enabled. And it can push clocks into the "mid-4GHz" range on more than one core, just not to the maximum specified 4.7GHz single-core boost. So it's a lot better than just getting the base clock of 3.5 GHz, which, like on most modern desktop processors, will generally only be seen under load if the CPU is overheating.

And the same could be said for Intel's 10-core processor, which will likewise undoubtedly push all-core boost clocks well above its base clock. Of course, it will be drawing a lot more power and putting out a lot more heat to do that on the 14nm node.

How the tables have turned. Reminds me of when AMD was struggling to stay competitive with the FX series and released the space-heater FX-9000 range of 220W CPUs.
AMD's Bulldozer and its derivatives were not exactly performance leaders even when the architecture was new though, and things only got worse in the years that followed. Intel, on the other hand, is still putting out a processor that's competitive from at least a performance standpoint. It sounds like most of their "10th-gen" lineup will finally be matching AMD on thread counts at roughly similar price points, so they will likely come out on top in gaming and many other tasks, even if the processors will be a lot more power hungry and in turn harder to cool. Of course, AMD can adjust prices to stay competitive, and they might not even have to for their highest core-count chips.

That's because you are focusing only on 16-lane PCIe 3.0 ...

16-lane slots must become history and be canceled.

Once we make 8-lane cards the maximum standard, the GPU will use only 8 lanes of PCIe 5, and the remaining 8 lanes will be free for other cards ...
It's not like they are going to make graphics cards smaller, so I don't see x16 slots going anywhere anytime soon.

You're also assuming that they wouldn't reduce the lane counts for processors and chipsets to match. If the lanes each provide double the bandwidth of their predecessors, and components are using half the lanes, that's probably what would happen. Keep in mind that doubling the bandwidth of each lane makes each lane more expensive, so you're not really gaining anything. Especially since, even with PCIe 4.0, you start running into physical limitations on the length of the traces running to the slots, requiring the signal to be boosted in-line to reach slots farther from the processor, which is part of why AMD's X570 boards tend to be more expensive. With PCIe 5.0, that's likely to only get worse. So, I kind of agree with the suggestion that PCIe 5.0 probably won't be on mainstream desktop computers for a number of years.
 
That's because you are focusing only on 16-lane PCIe 3.0 ...

16-lane slots must become history and be canceled.

Once we make 8-lane cards the maximum standard, the GPU will use only 8 lanes of PCIe 5, and the remaining 8 lanes will be free for other cards ...

Moreover, you can put a 10Gbit card on only 1 lane of PCIe 5.0. Today you need 4 or 8 lanes for that on 3.0.

1-lane slots today are useless on PCIe 3.0, but on PCIe 5.0? Almost all hardware could be put on them, and for cheap. Most motherboards don't have more than one 4-lane slot, and the rest are 1-lane only.

True, 16 lanes are not saturated, BUT 1 and 4? They are saturated today. Stop thinking only of 16-lane GPU cards.
You can run single-port 10GbE on a PCIe 4.0 x1 lane as well; only when you get into dual ports do you need an x4 slot. What will be amazing with PCIe 5.0 is being able to run a single 25GbE port on an x1 slot, or quad-port 25GbE on an x4 slot. But once again, this is all dealing with server-level equipment. For home use, 99% of people can get along fine with standard 1GbE.
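A quick check of those port-count claims, reusing the same approximate post-encoding per-lane rates (NIC demand vs. slot supply in GB/s; protocol overhead ignored):

```python
# NIC demand vs. slot supply, using approximate post-encoding per-lane rates.
LANE_GB_S = {"4.0": 1.969, "5.0": 3.938}   # GB/s per lane

def fits(ports, port_gbps, gen, lanes):
    """True if a multi-port NIC fits the given slot at full line rate."""
    return ports * port_gbps / 8 <= LANE_GB_S[gen] * lanes

print(fits(1, 10, "4.0", 1))   # True:  single-port 10GbE on PCIe 4.0 x1
print(fits(2, 10, "4.0", 1))   # False: dual-port 10GbE overflows a 4.0 x1 link
print(fits(1, 25, "5.0", 1))   # True:  one 25GbE port on PCIe 5.0 x1
print(fits(4, 25, "5.0", 4))   # True:  quad-port 25GbE on PCIe 5.0 x4
```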
 
As a user who believed the AMD FX hype in 2014 and actually bought an Asus M32BC desktop with an 8-core AMD FX-8310, performance felt about the same as my previous AMD A6-3620 APU-based system. Gaming was very laggy in a lot of games; performance just didn't feel like the 8 cores as marketed.

So I finally had enough money and switched to an Intel i7-7700 based system with a GeForce 1050 2GB, and performance is so much better. It will be even better once I upgrade the video card, ASAP.
 
That's not quite accurate. From the reviews I've seen, the 3950X manages an all-core boost of around 4.0 GHz at stock, and a bit higher still with PBO enabled. And it can push clocks into the "mid-4GHz" range on more than one core, just not to the maximum specified 4.7GHz single-core boost. So it's a lot better than just getting the base clock of 3.5 GHz, which, like on most modern desktop processors, will generally only be seen under load if the CPU is overheating.
For the same type of workload that would produce 300W on the Intel CPU, the 3950X would run at base clocks.
Of course, there are other workloads that would draw fewer watts on the Intel CPU and would also allow higher clocks on the Ryzen.
The 9900K at 5GHz only uses 73W in The Witcher 3, for example...
https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html

Of course, it will be drawing a lot more power and putting out a lot more heat to do that on the 14nm node.
People tend to forget that Intel's nm is denser than anyone else's, and anyone else's nm is looser than Intel's.
There is not much difference between the technologies, and looking at proper benchmarks confirms that.