News: Comet Lake to Allegedly Feature 10 Cores With up to 5.3 GHz Thermal Velocity Boost

AMD CTO Mark Papermaster: "you can't rely on that frequency bump from every new semiconductor node."

Evidently no one told Intel they couldn't show frequency increases with each new chip advance.
 
No PCIe 4.0 support with the "latest/greatest" chipset?
Isn't PCIe 4.0 the current standard?
What am I missing here, please?
I guess they figured there was no point in making the chipset PCIe 4.0 when the link to the CPU would still only be 3.0. And the CPU lanes are still 3.0 because this is another rehash of the same architecture/node they've been on since Skylake, which is why there haven't really been any new features introduced since then other than higher clock speeds and more cores.
 
Skylake refresh no. 5.
Amazing. There were five CPU generations in the "Core i" lineup before Skylake; since Skylake, all we've gotten is Skylake plus more clock speed and more cores. How is this innovation? Shouldn't Intel be ashamed of themselves? Even more hilarious, Skylake is technically a revision of Broadwell, though I'll give Intel some credit on that revision, as they basically redid the memory subsystem between Broadwell and Skylake. They haven't made any significant revisions since that memory overhaul.

I'd make a joke about those 9000-series Piledrivers (Piledriver plus more clock speed), but even AMD was ashamed of that move and dropped the idea after the first try. Intel has gone to this well five times now.
 
No PCIe 4.0 support with the "latest/greatest" chipset?
Isn't PCIe 4.0 the current standard?
What am I missing here, please?
When I next upgrade, I'll want PCIe 4, but the TH results of the Patriot Viper VP4100 M.2 NVMe review vs. the top PCIe 3 NVMe drives were pretty underwhelming. Not what one would hope for given the pretty dramatic gains from HDD to SSD to NVMe.
Maybe PCIe 4 will show more with graphics cards eventually, but are Nvidia and AMD going to market new-gen graphics cards only to the limited number of early adopters?
 
This isn't a new node, so I don't see how that quote is relevant.
Intel picked up another 6% increase in frequency alone, and it will take actual reviews to determine whether any other architectural features increase performance in non-workstation uses. Not bad for an incremental change that will replace the 9000 series without a significant price increase. Enough to justify upgrading from a 9000-series CPU? Obviously not.
 
Shouldn't Intel be ashamed of themselves?

They are selling everything they can make! They are even able to sell their "seconds" for near full value.

They have near-record revenues with matching profits, and they retook the lead as the world's largest semiconductor manufacturer.

I think they are probably proud rather than ashamed.

Gaming processors are not where the largest future profits are going to be and are not the focus of the biggest innovations and R&D efforts.
 
No PCIe 4.0 support with the "latest/greatest" chipset?
Isn't PCIe 4.0 the current standard?
What am I missing here, please?
PCIe 4.0 only matters for Gen 4 NVMe drives and GPUs with small VRAM buffers. So unless you really need a Gen 4 NVMe drive, which I doubt you do unless you're using it as a file-transfer cache across multiple 10 GbE NICs, or you plan on buying a 4GB version of the 5500 XT or 5400 XT, PCIe Gen 4 is nice but not necessary at this point in time.
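For reference, here's the back-of-the-envelope bandwidth math behind that: PCIe 3.0 runs at 8 GT/s per lane and 4.0 at 16 GT/s, both with 128b/130b encoding. A minimal sketch of the theoretical link maximums (not what any real drive or GPU actually hits):

```python
# Theoretical PCIe link bandwidth: transfer rate x encoding efficiency / 8 bits per byte.
RATES = {"PCIe 3.0": 8e9, "PCIe 4.0": 16e9}  # transfers per second, per lane
ENCODING = 128 / 130                         # 128b/130b line encoding (Gen 3 and Gen 4)

for gen, rate in RATES.items():
    per_lane = rate * ENCODING / 8 / 1e9     # GB/s per lane
    print(f"{gen}: x1 = {per_lane:.2f} GB/s, "
          f"x4 (M.2 slot) = {4 * per_lane:.2f} GB/s, "
          f"x16 (GPU slot) = {16 * per_lane:.2f} GB/s")
```

So a Gen 3 x4 M.2 slot tops out just under 4 GB/s, which is why Gen 4 drives are the first consumer SSDs that can actually exceed it, while a Gen 3 x16 GPU slot already offers about 15.75 GB/s.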
 
When I next upgrade, I'll want PCIe 4, but the TH results of the Patriot Viper VP4100 M.2 NVMe review vs. the top PCIe 3 NVMe drives were pretty underwhelming. Not what one would hope for given the pretty dramatic gains from HDD to SSD to NVMe.
Maybe PCIe 4 will show more with graphics cards eventually, but are Nvidia and AMD going to market new-gen graphics cards only to the limited number of early adopters?
GPUs can't saturate Gen 3 PCIe, so it's doubtful that any huge leap in architecture will let them saturate Gen 4. The only benefit of PCIe Gen 4 right now is if you have a small VRAM buffer on your card (4GB) and you run out of VRAM; once the card spills over into system RAM for frame buffers, you see a lot less of a performance loss than on PCIe Gen 3 because of the higher throughput.
 
If the 40 PCIe lanes are true, let's say 24 for the CPU and 16 for the PCH, then this is a huge upgrade for the desktop market. But would Intel just roll over on their HEDT lineup and push out something that directly competes on a lower-cost platform? That doesn't make any sense. I would say the PCIe lane count doesn't make sense for the desktop side.
 
If the 40 PCIe lanes are true, let's say 24 for the CPU and 16 for the PCH, then this is a huge upgrade for the desktop market. But would Intel just roll over on their HEDT lineup and push out something that directly competes on a lower-cost platform? That doesn't make any sense. I would say the PCIe lane count doesn't make sense for the desktop side.
It's 40 "platform" lanes, so presumably 16 from the CPU and up to 24 from the PCH (assuming Z series chipset). Exact same as Z270/370/390.
 
PCIe 4 is necessary for fast storage. The current controllers already show a significant boost even though they aren't very good; new controllers that better exploit PCIe 4 will launch in 2H 2020, if not before.

Video cards are another matter. People dying for the speed of PCIe 4 aren't going to buy junk memory-starved video cards, so there won't be much to gain other than consuming fewer lanes with your add-on cards.
 
When I next upgrade, I'll want PCIe 4, but the TH results of the Patriot Viper VP4100 M.2 NVMe review vs. the top PCIe 3 NVMe drives were pretty underwhelming. Not what one would hope for given the pretty dramatic gains from HDD to SSD to NVMe.
Maybe PCIe 4 will show more with graphics cards eventually, but are Nvidia and AMD going to market new-gen graphics cards only to the limited number of early adopters?
What a joke Intel has become. Another 14 nm refresh with an incremental increase in frequency "in certain conditions." LMAO. Their upgrades can't even keep pace with the performance hits caused by all the patches for their security flaws.
 
PCIe 4.0 only matters for Gen 4 NVMe drives and GPUs with small VRAM buffers. So unless you really need a Gen 4 NVMe drive, which I doubt you do unless you're using it as a file-transfer cache across multiple 10 GbE NICs, or you plan on buying a 4GB version of the 5500 XT or 5400 XT, PCIe Gen 4 is nice but not necessary at this point in time.

Says the teenager who thinks all things PC are centered around gaming...

You can easily saturate the PCIe 3.0 bus with a few good Gen 3 NVMe drives in RAID. And it's not just about speed, it's about bandwidth.
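Rough numbers to illustrate the point, assuming the drives hang off the chipset as on mainstream Intel boards, where the PCH shares a DMI 3.0 uplink roughly equivalent to PCIe 3.0 x4; the per-drive figure below is just an illustrative sequential speed for a good Gen 3 x4 drive:

```python
# Aggregate throughput of chipset-attached NVMe RAID vs. the shared DMI 3.0 uplink.
DMI3_GBPS = 3.94    # ~PCIe 3.0 x4-equivalent link between PCH and CPU
DRIVE_GBPS = 3.5    # illustrative sequential read for a good Gen 3 x4 NVMe drive

for drives in (1, 2, 3):
    requested = drives * DRIVE_GBPS
    delivered = min(requested, DMI3_GBPS)
    print(f"{drives} drive(s): {requested:.1f} GB/s of drive bandwidth, "
          f"uplink caps the array at {delivered:.2f} GB/s")
```

Two good Gen 3 drives are already enough to run into that shared link.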
 
PCIe 4 is necessary for fast storage. The current controllers already show a significant boost even though they aren't very good; new controllers that better exploit PCIe 4 will launch in 2H 2020, if not before.

Video cards are another matter. People dying for the speed of PCIe 4 aren't going to buy junk memory-starved video cards, so there won't be much to gain other than consuming fewer lanes with your add-on cards.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-ti-pci-express-scaling/7.html


"What do these numbers spell for you? For starters, installing the RTX 2080 Ti in the topmost x16 slot of your motherboard while sharing half its PCIe bandwidth with another device in the second slot, such as an M.2 PCIe SSD, will come with performance penalties, even if they're small. These penalties didn't exist with older-generation GPUs because those were slower and didn't need as much bandwidth. Again, you're looking at 3%, which may or may not be worth the convenience of being able to run another component; that's your decision. "

"For the first time since the introduction of PCIe gen 3.0 (circa 2011), 2-way SLI on a mainstream-desktop platform, such as Intel Z370 or AMD X470, could be slower than on an HEDT platform, such as Intel X299 or AMD X399, because mainstream-desktop platforms split one x16 link between two graphics cards, while HEDT platforms (not counting some cheaper Intel HEDT processors), provide uncompromising gen 3.0 x16 bandwidth for up to two graphics cards. Numbers for gen 3.0 x8 and gen 3.0 x4 also prove that PCI-Express gen 2.0 is finally outdated, so it's probably time you considered an upgrade for your 7-year old "Sandy Bridge-E" rig. "
 
Evidently no one told Intel they couldn't show frequency increases with each new chip advance.
I think they are mainly taking a page from AMD's book: "Active Core Group Tuning" could involve some selective steering of workloads to cores which clock higher.

Until now, Intel has supported the max boost on any core. The chip doesn't have a preferred core, although some cores will undoubtedly handle higher clocks than others. This was obviously leaving performance on the table.

Anyway, even Intel's old "tick tock" model delivered some clock speed gains on the same process node. Some of that probably came from ongoing process improvement, while some of it came from more experience with a given node and more insight into the best ways to milk it with uArch tweaks.
 
the CPU lanes are still 3.0 because this is another rehash of the same architecture/node they've been on since Skylake
I don't believe that. The PCIe controller should be self-contained enough that they could've upgraded it, if they wanted.

I think the real reason it's PCIe 3.0 is that they were caught off-guard by AMD's introduction of 4.0 to mainstream PCs, in 2019. It wasn't an obvious move - the market wasn't clamoring for it. AMD pretty much just did it because they were adding it for servers and thought "why not give it to consumers, too?"

When I next upgrade, I'll want PCIe 4, but the TH results of the Patriot Viper VP4100 M.2 NVMe review vs. the top PCIe 3 NVMe drives were pretty underwhelming. Not what one would hope for given the pretty dramatic gains from HDD to SSD to NVMe.
Wait until either Samsung releases their PCIe 4 SSDs or Intel/Micron release some Optane/3D XPoint drives. Otherwise, you're just getting controllers from mid-market players paired with probably not even the fastest NAND.

The main reason to buy into PCIe 4.0, at this time, is for future-proofing yourself. If you plan to keep your motherboard through a couple more GPU and SSD upgrades, then it would be a wise investment. As of today, the real benefits are marginal.

are Nvidia and AMD going to market new-gen graphics cards only to the limited number of early adopters?
Huh? The RX 5700 is a pretty mainstream GPU and supports PCIe 4.0.
 
Even more hilarious, Skylake is technically a revision of Broadwell
No, it's a different uArch.

Of course, all their uArchs are evolutionary changes of their predecessors, but Broadwell had more in common with Haswell than Skylake had with Broadwell.


The main thing unchanged between Broadwell and Skylake was the manufacturing node.
 
They are selling everything they can make! They are even able to sell their "seconds" for near full value.

They have near-record revenues with matching profits, and they retook the lead as the world's largest semiconductor manufacturer.

I think they are probably proud rather than ashamed.
You're misreading the situation, badly.

Their 14 nm crunch is mainly due to two things:
  1. Failure of 10 nm to ramp up, when they planned.
  2. Unanticipated market demand.
As a consequence of #1, they had to respin existing designs with more cores to keep the product pipeline filled. That meant even bigger chips, which means fewer dies per wafer, which further exacerbates their supply problems!
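To make the die-size point concrete, here's the usual dies-per-wafer approximation on a 300 mm wafer, with purely hypothetical die areas standing in for an 8-core versus a 10-core respin (Intel doesn't publish the actual figures):

```python
import math

# Classic dies-per-wafer approximation (ignores defect yield):
# DPW ~= pi * (d/2)^2 / A  -  pi * d / sqrt(2 * A), where d = wafer diameter, A = die area.
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for area in (150.0, 180.0):  # hypothetical 8-core vs. 10-core die areas, in mm^2
    print(f"{area:.0f} mm^2 die: ~{dies_per_wafer(area):.0f} candidate dies per wafer")
```

Even a modest bump in die area trims candidate dies per wafer by a double-digit percentage before yield losses, which is exactly the supply squeeze described above.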

As for their current financial performance, gross margins are down, even despite the recent ratcheting up of their pricing that accompanied their core-count increases.

Intel is currently succeeding more in spite of themselves than anything else. If their product offerings were so great, they wouldn't need to be cranking up the TDPs and then still overclocking their chips well beyond.
 
PCIe 4.0 only matters for Gen 4 NVMe drives and GPUs with small VRAM buffers.
Not at the margins.

I did some analysis of the PCIe scaling data obtained from this review: https://www.techpowerup.com/review/pci-express-4-0-performance-scaling-radeon-rx-5700-xt/

I wrote this up, in a post on August 29th, but I can't seem to find it. 100 internets to anyone who can. Anyway, I found my spreadsheet, so I still have the data.

Mean performance increase, across all games (I think at 1080p):
v2 to v3: 1.4%
v3 to v4: 0.8%
v2 to v4: 2.2%

Out of the 22 games they tested, 7 experienced more than 1% improvement between PCIe 3.0 and 4.0:

Game                       2.0 to 3.0   3.0 to 4.0   2.0 to 4.0
Assassin's Creed Odyssey   0.8%         3.2%         4.1%
F1 2018                    1.0%         2.3%         3.3%
Civilization VI            5.0%         1.7%         6.8%
Wolfenstein 2              4.1%         1.6%         5.8%
Divinity Original Sin 2    1.0%         1.4%         2.4%
Sekiro                     0.8%         1.3%         2.2%
Rage 2                     5.5%         1.2%         6.7%

So, we can see that PCIe 4.0 isn't without benefit, even if it's not huge. However, I think we can expect the gains to accrue further with even faster GPUs and CPUs, especially if you're running dual GPUs at only x8 each.

Still, I would agree that PCIe 4.0 is hardly a must-have feature, for most. Not bad future-proofing to have, but not worth much, today.
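For anyone who wants to sanity-check the averaging, here's a minimal sketch using only the seven games tabled above; reproducing the headline 1.4% / 0.8% / 2.2% means would need the full 22-game dataset from the linked review:

```python
# Per-column mean of the PCIe scaling gains for the seven games listed above (percent).
gains = {
    "Assassin's Creed Odyssey": (0.8, 3.2, 4.1),
    "F1 2018":                  (1.0, 2.3, 3.3),
    "Civilization VI":          (5.0, 1.7, 6.8),
    "Wolfenstein 2":            (4.1, 1.6, 5.8),
    "Divinity Original Sin 2":  (1.0, 1.4, 2.4),
    "Sekiro":                   (0.8, 1.3, 2.2),
    "Rage 2":                   (5.5, 1.2, 6.7),
}

for i, label in enumerate(("2.0 to 3.0", "3.0 to 4.0", "2.0 to 4.0")):
    column = [row[i] for row in gains.values()]
    print(f"{label}: mean gain {sum(column) / len(column):.1f}% across {len(column)} games")
```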
 
The main reason to buy into PCIe 4.0, at this time, is for future-proofing yourself. If you plan to keep your motherboard through a couple more GPU and SSD upgrades, then it would be a wise investment. As of today, the real benefits are marginal.

Huh? The RX 5700 is a pretty mainstream GPU and supports PCIe 4.0.
The RX 5700 can't even match the performance of a three-year-old 1080 Ti.
 
I think they are mainly taking a page from AMD's book: "Active Core Group Tuning" could involve some selective steering of workloads to cores which clock higher.

Until now, Intel has supported the max boost on any core. The chip doesn't have a preferred core, although some cores will undoubtedly handle higher clocks than others. This was obviously leaving performance on the table.

Turbo Boost Max 3.0, released in 2016 with Broadwell-E, ranks all cores in order of performance, with the ranking stored on the CPU during manufacturing. Single- and dual-core boosted loads are sent to the best cores.
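If you're curious which cores your own chip favors, here's a minimal sketch, assuming a Linux system whose kernel exposes the ACPI CPPC data under /sys (those paths won't exist everywhere; on Windows, tools like Intel XTU show the favored cores instead):

```python
from pathlib import Path

# Rank logical CPUs by the firmware-reported "highest_perf" value, which
# Turbo Boost Max 3.0 / the kernel's ITMT support uses to pick the favored cores.
perf = {}
for f in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/acpi_cppc/highest_perf"):
    try:
        perf[f.parent.parent.name] = int(f.read_text())
    except (OSError, ValueError):
        continue  # entry missing or unreadable on this system

if not perf:
    print("No ACPI CPPC data exposed on this system.")
for cpu, value in sorted(perf.items(), key=lambda kv: -kv[1]):
    print(f"{cpu}: highest_perf = {value}")
```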
 