News: Comet Lake-S CPUs Allegedly Command New LGA 1200 Socket and 400-Series Chipset

Oh, I believe it works and will ship... in servers. We don't yet have any info on when/if it will hit desktops.


So, you're arguing that people should hold off on PCIe 4.0, since you expect 5.0 won't be far off?

I think this is irresponsible advice, since PCIe 5.0 is much more technically demanding (i.e., it doubles frequencies to near theoretical limits) and we have not seen any consumer product roadmaps that include it. You don't actually know if/when it's coming.

Your position falls dangerously close to FUD.

Besides, DDR5 is a lot closer than PCIe 5.0, but you don't tell people not to buy DDR4-based systems, eh?


IMO, this is the reason people might want to hold off on PCIe 4.0: they don't need it. On the other hand, they probably don't need an 8-core CPU or NVMe storage, either.

Not my point at all, actually. I don't think people should hold off; just that, depending on how fast PCIe 5.0 gets here, PCIe 4.0 might not be a major thing, much like how SATA Express was basically dead in the water because NVMe came out around the same time and was vastly faster and better, since it was designed specifically for SSDs.

Let's look at PCIe 4.0. It was finalized and released in October of 2017. Then, in June 2019, we had its first official mainstream launch (Zen 2). That's 20 months from finalization to launch. If PCIe 5.0 follows suit (I am not saying it will, but if it does), we could see a mainstream product with it in January of 2021. It could even come faster. Intel might pick it up to have a one-up on AMD for whatever they launch in 2021, or AMD might move to it with a new socket, since AM4 is supported till 2020.

That's all I am saying. I never tell people not to buy, because no matter when you do, something better comes out in less than a year. But PCIe 5.0 was finalized before the first PCIe 4.0 product came out, which almost makes it a moot interface, like SATA Express.
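To sanity-check that timeline arithmetic, here's a quick Python sketch. Note the May 2019 finalization date for PCIe 5.0 is my addition here, not something stated in the post above:

```python
from datetime import date

def add_months(d, months):
    """Return the date `months` months after d (day clamped to 1 for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

pcie4_final = date(2017, 10, 1)   # PCIe 4.0 finalized, October 2017
pcie4_launch = date(2019, 6, 1)   # first mainstream product (Zen 2), June 2019

# Months between finalization and mainstream launch
gap = (pcie4_launch.year - pcie4_final.year) * 12 + (pcie4_launch.month - pcie4_final.month)
print(gap)  # → 20

# If PCIe 5.0 (finalized May 2019) followed the same 20-month cadence:
pcie5_final = date(2019, 5, 1)
print(add_months(pcie5_final, gap))  # → 2021-01-01
```

So the January 2021 figure is just "same 20-month gap, shifted forward", nothing more.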
 
Let's look at PCIe 4.0. It was finalized and released in October of 2017. Then, in June 2019, we had its first official mainstream launch (Zen 2). That's 20 months from finalization to launch. If PCIe 5.0 follows suit (I am not saying it will, but if it does), we could see a mainstream product with it in January of 2021. It could even come faster. Intel might pick it up to have a one-up on AMD for whatever they launch in 2021, or AMD might move to it with a new socket, since AM4 is supported till 2020.
It's one thing to speculate on when it might arrive, but you should be careful about warning people off PCIe 4.0 without any information on the actual availability of PCIe 5.0.

I never tell people not to buy, because no matter when you do, something better comes out in less than a year. But PCIe 5.0 was finalized before the first PCIe 4.0 product came out, which almost makes it a moot interface, like SATA Express.
Again, you're giving no regard to the cost & practical compromises involved in PCIe 5.0. Allegedly, PCIe 4.0 is why AMD had to add a chipset fan to X570 boards, and PCIe 5.0 will use even more power.

Unlike SATA Express, PCIe 4.0 peripherals can still be used @ PCIe 4.0 speeds in a PCIe 5.0 system. This makes it not a wasted investment. And a better analogy for PCIe 5.0 is probably 10 Gigabit Ethernet, where it got used in the enterprise, cloud, and datacenters for over a decade, before finally starting to trickle down to the mainstream.

And, to further the analogy, a draft of PCIe 6.0 has already been released. It could even get finalized before PCIe 5.0 products launch. That's like how 100 Gbps Ethernet is already in use, even though most consumers & small businesses don't yet have 10 Gbps. It proves that you can have an enterprise standard that's several iterations ahead of where consumers are at.
 

I am not warning anyone off anything.

And PCIe 4.0 didn't. It was finalized and pushed out very fast.

As for PCIe 5.0 and power, I can't say yes or no. It depends on the process size. AMD's X570 chipset is manufactured on the 14nm process, so that could contribute to higher temps/power draw, which may be mitigated with a smaller process. Of course, a smaller process can up the cost, and X570 boards are pricier than X470 boards.

Again, I am just stating that we may see it sooner than expected, considering how fast PCI-SIG finalized it.
 
Mainboard manufacturers are making tons of money with the new hardware needed and the AMD surge, which should also be encouraging them to push AMD.

Intel probably truly needs a new chipset/socket for the CPUs beyond Comet Lake-S, and by introducing it early, they can also try to compete in the supply/partner channels with mainboard manufacturers, as it won't be only AMD giving these companies a jump in consumer business.

Just my two cent theory I wanted to throw out.
 
And PCIe 4.0 didn't. It was finalized and pushed out very fast.
Didn't "what"? You mean there wasn't an inordinate delay between its finalization and when AMD brought it to the mainstream? Maybe that had something to do with the 8.5 years between PCIe 3.0's finalization and the launch of Ryzen 3k. That gave CPUs, GPUs, SSDs, and systems a chance to mostly catch up to PCIe 3.0's capabilities, so that launching 4.0, at this time, is not entirely pointless. However, perhaps Intel didn't agree, since it appears AMD's move caught them by surprise.

To go back to the Ethernet analogy, gigabit Ethernet was adopted much more promptly than 10 GigE. Sometimes, tech just gets ahead of where it's worthwhile or cost-effective for consumers.

As for PCIe 5.0 and power, I can't say yes or no. It depends on the process size.
The bigger concern is that it takes a certain amount of power to send a signal down a wire of a certain length, at a certain frequency. That doesn't change as a function of the IC's process size, which affects only things like switching and routing efficiency. PCIe 5.0 is at about the limit of frequencies it's practical to use for this purpose. I'm not an EE, but from what I've read about PCIe 5.0 and 6.0 (which doesn't increase clocks, but uses a different signaling scheme to shove more bits down the wire), it sounds like PCIe 5.0 will definitely add cost and possibly other compromises.

Again, I am just stating that we may see it sooner than expected, considering how fast PCI-SIG finalized it.
It's already on Intel's server roadmaps, where it's needed for AI accelerators, 100+ Gbps networking, and SSD storage arrays. Ice Lake-SP is supposed to get PCIe 4.0, in Q2 of 2020. The jump to 5.0 should come with Sapphire Rapids, in Q1 of 2021.

[Image: Intel server roadmap, April 2019]

However, their workstation CPUs are/were slated to remain on PCIe 3.0 until 2021. No mention of it for consumer products, but the Comet Lake Xeon E is basically from the same line as the Comet Lake-S that's destined for consumer desktops. And that confirms the article's claim of PCIe 3.0.

[Image: Intel workstation roadmap slide]
 

I would think Intel didn't see the need for PCIe 4.0 on mainstream yet and I don't disagree. They have been trying to push Optane as their solution for storage with the eventuality, I would hope, of having NVDIMMs instead of a different storage class.

No consumer GPU saturates PCIe 3.0 yet and I don't think we will for a few more years.

That said, I didn't say it was pointless. I said that, with the fast finalization of PCIe 5.0, it might make it a moot point to have PCIe 4.0 now. That's all I said. I wasn't even originally referencing Intel per se, but hell, even AMD could move to it faster than we think.
 
No consumer GPU saturates PCIe 3.0 yet and I don't think we will for a few more years.
That's not really true. Gains have been shown, even a while back, if small (a few percent). It's more important for those trying to run at extremely high framerates.

The other thing about PCIe is that increasing bandwidth has the effect of decreasing latency. This occurs because the less time data transfers take, the sooner the data can be used. This is important for use cases like VR and eSports.

We're not talking about enormous differences, but given all the other things that people do to milk a little more performance, it's worth considering.
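The bandwidth-to-latency point is easy to illustrate with a toy calculation (the 256 MB asset size is hypothetical; the bandwidth figures are approximate usable rates for x16 links):

```python
# Transfer time = size / bandwidth; doubling bandwidth halves the time
# the data spends in flight, which is the latency reduction described above.
size_gb = 0.256            # hypothetical 256 MB asset
pcie3_x16_gbs = 15.75      # ~usable GB/s for a PCIe 3.0 x16 link
pcie4_x16_gbs = 31.5       # ~usable GB/s for a PCIe 4.0 x16 link

t3 = size_gb / pcie3_x16_gbs * 1000  # milliseconds
t4 = size_gb / pcie4_x16_gbs * 1000
print(f"PCIe 3.0: {t3:.1f} ms, PCIe 4.0: {t4:.1f} ms, saved: {t3 - t4:.1f} ms")
```

A handful of milliseconds per transfer is small, but at high framerates each frame only has a few milliseconds to begin with, which is the point being made above.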

Finally, PCIe 4.0 can let you max out an NVMe SSD on just a x2 connection, where PCIe 3.0 would've needed a x4 connection. For something like the CPU <-> south bridge connection, it makes a lot of sense. You might have two NVMe drives connected through it, so you'd get faster drive-to-drive copies if they weren't bottlenecked by the south bridge <-> CPU link.
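The x2-versus-x4 point checks out numerically; the figures below use the standard 8 GT/s and 16 GT/s per-lane rates with 128b/130b encoding:

```python
# Usable bandwidth per lane, in GB/s: raw GT/s * (128/130 encoding efficiency) / 8 bits
def lane_gbs(gt_per_s):
    return gt_per_s * (128 / 130) / 8

gen3 = lane_gbs(8.0)    # ≈ 0.985 GB/s per PCIe 3.0 lane
gen4 = lane_gbs(16.0)   # ≈ 1.969 GB/s per PCIe 4.0 lane

print(f"PCIe 3.0 x4: {4 * gen3:.2f} GB/s")  # → 3.94 GB/s
print(f"PCIe 4.0 x2: {2 * gen4:.2f} GB/s")  # → 3.94 GB/s: same headroom on half the lanes
```

Freeing up two lanes per drive is exactly what makes a PCIe 4.0 south-bridge link attractive.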

Anyway, it's wrong to think of PCIe link bandwidth as something that has to be saturated, before benefits are realized by upgrading. It doesn't really work that way. That view only really applies to a networking link that aggregates a lot of traffic.

I said that, with the fast finalization of PCIe 5.0, it might make it a moot point to have PCIe 4.0 now. That's all I said.
If it were that simple, then sure. But, there's physics involved. It's not as simple as just flipping a switch or changing a protocol. Anyway, we'll see how it plays out.

Edit: here are some good PCIe benchies, for you. Once upon a time, Tom's would've been all over such an issue. Anyway, the article downplays the results, in their conclusion. However, a few (especially lower-resolution) are actually significant!


Keep in mind that this is a mid-range card. Perhaps the results would be even more pronounced with a faster GPU.
 
So, when I crunch the numbers, I get a mean improvement of 1.4% for PCIe 2.0 to 3.0 and 0.8% from 3.0 to 4.0. However, going from 2.0 to 4.0 is 2.2%.

Now, that doesn't tell the whole story. Some games are more sensitive to PCIe bottlenecks than others. Here are the results from the most sensitive:

| Game | PCIe 3 vs. PCIe 2 | PCIe 4 vs. PCIe 3 | PCIe 4 vs. PCIe 2 |
| --- | --- | --- | --- |
| Civilization VI | 5.0% | 1.7% | 6.8% |
| Rage 2 | 5.5% | 1.2% | 6.7% |
| Wolfenstein 2 | 4.1% | 1.6% | 5.8% |
| Assassins Creed Od. | 0.8% | 3.2% | 4.1% |
| F1 2018 | 1.0% | 2.3% | 3.3% |
| Battlefield V | 2.3% | 0.7% | 3.0% |
| Divinity Orig Sin 2 | 1.0% | 1.4% | 2.4% |
| Sekiro | 0.8% | 1.3% | 2.2% |
| Mean | 2.6% | 1.7% | 4.3% |
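Re-running those numbers confirms the mean row (a quick Python check over the per-game values above):

```python
# Per-game gains from the table above, in percent:
# (PCIe 3 vs. 2, PCIe 4 vs. 3, PCIe 4 vs. 2)
gains = {
    "Civilization VI":     (5.0, 1.7, 6.8),
    "Rage 2":              (5.5, 1.2, 6.7),
    "Wolfenstein 2":       (4.1, 1.6, 5.8),
    "Assassins Creed Od.": (0.8, 3.2, 4.1),
    "F1 2018":             (1.0, 2.3, 3.3),
    "Battlefield V":       (2.3, 0.7, 3.0),
    "Divinity Orig Sin 2": (1.0, 1.4, 2.4),
    "Sekiro":              (0.8, 1.3, 2.2),
}

means = []
for i, label in enumerate(["PCIe 3 vs. 2", "PCIe 4 vs. 3", "PCIe 4 vs. 2"]):
    mean = sum(v[i] for v in gains.values()) / len(gains)
    means.append(mean)
    print(f"{label}: mean gain {mean:.1f}%")
# → 2.6%, 1.7%, 4.3%, matching the table's mean row
```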

And this data was collected on a Ryzen 9 3900X and Radeon RX 5700 XT, at stock settings, with launch-day drivers. Improve any one of those variables and the differences should only become more stark.

So, you really can't argue that it makes no difference. These kinds of improvements are on par with a couple hundred MHz of overclocking or using a faster RAM speed - things people do all the time. For any enthusiast looking to milk the most performance from their system, PCIe should not be overlooked.

That said, just like some games are more CPU or GPU bottlenecked, the impact of PCIe bandwidth affects different games differently.
 
At the very least, it will likely push more frustrated consumers toward Team Red.

Check your facts. AMD's colour, just like nVIDIA's, is green; red was the colour of the former ATi, including the Radeon brand, which AMD acquired.
 
Check your facts. AMD's colour, just like nVIDIA's, is green; red was the colour of the former ATi, including the Radeon brand, which AMD acquired.
Good point. I do have old AMD CPU boxes with black, white, and green. But this site has been using Red as a shorthand for AMD for so long, possibly since the ATI days, that most of us are probably just used to it.

Funny thing is, when I go to amd.com, I don't really see any green, other than the XBox logo. However, there's the red illumination of a Radeon graphics card and a red fireball directing visitors towards the consumer graphics products. And those pages continue the mostly black motif, with red highlights.

Even the "About AMD" page doesn't feature green, nor does the CPU landing page, which is grey, black, and white, with orange highlights.

Based on that one visit, I'd say AMD's colors are white and black. I don't know if that's true, but I certainly wouldn't connect them to green. Ryzen seems to have a distinct orange motif, while Radeon does in fact seem to continue with a red theme.
 
with the fast finalization of PCIe 5.0, it might make it a moot point to have PCIe 4.0 now. That's all I said. I wasn't even originally referencing Intel per se, but hell, even AMD could move to it faster than we think.
FYI, I ran across a short article outlining the added costs & challenges of PCIe 4 & 5:


Practically the whole article is quotable, but this should whet your appetite:
The big tradeoff of the higher speeds is that signals won’t travel as far on existing designs. In the days of PCIe 1.0, the spec sent signals as much as 20 inches over traces in mainstream FR4 boards, even passing through two connectors. The fast 4.0 signals will peter out before they travel a foot without going over any connectors.

So system makers are sharpening their pencils on the costs of upgrading boards and connectors, adding chips to amplify signals, or redesigning their products to be more compact.

It's a nice, short read. I just thought you might be interested.
 

Costs will eventually be solved. Everything starts this way. 10GbE was originally a large PCIe add-on card. It has since scaled down and will eventually scale down enough to integrate on the motherboard or chipset.

I do agree, though, that distance is always the biggest limiter in the beginning. I say "beginning" because, like with everything, there will be a breakthrough that helps us achieve what's needed to move forward. It may take more or less time depending on who is pushing it.

Think of CAT cable, a good example. Right now CAT5e, the most commonly used, can do 1Gbps over 100M, or 328 feet. We personally make the cut-off 300 feet, as after that you start to get a lot of packet loss. CAT6 allowed for 10Gbps, but only up to 55M, although that was a stretch too. There are other CAT6 standards, but CAT6a is supposed to allow for 10Gbps at 100M, with the rumored speeds for CAT7 being 100Gbps at 15M and 40Gbps at 50M.
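The speed/distance pairs in that paragraph can be captured as a small lookup. The values below are exactly as stated above (including the rumoured CAT7 figures), not official spec numbers:

```python
# (speed in Gbps, max run in metres) per category, as quoted in the post above
cat_cable = {
    "CAT5e": [(1, 100)],
    "CAT6":  [(10, 55)],
    "CAT6a": [(10, 100)],
    "CAT7":  [(100, 15), (40, 50)],   # rumoured figures
}

def max_speed_at(category, metres):
    """Highest quoted speed (Gbps) a category supports at the given run length."""
    options = [speed for speed, dist in cat_cable[category] if metres <= dist]
    return max(options) if options else None

print(max_speed_at("CAT6", 50))   # → 10
print(max_speed_at("CAT6", 80))   # → None (beyond CAT6's quoted 55 m for 10G)
print(max_speed_at("CAT7", 10))   # → 100
print(max_speed_at("CAT7", 40))   # → 40 (too far for the 100G figure, within the 40G one)
```

It makes the trade-off plain: each step up in speed shrinks the usable run length.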

It may take time or it may not. I guess we shall see, but yes, it's an interesting article, as it lays out a very common issue across all of computing: how to fit more bandwidth into the same space and the same distance. I honestly think we might hit a point (we might already be there) where we need to look at integrating fiber into silicon to gain bandwidth:

https://www.tomshardware.com/news/intel-silicon-photonics-transceiver-400g,39028.html
 
Costs will eventually be solved. Everything starts this way. 10GbE was originally a large PCIe add-on card. It has since scaled down and will eventually scale down enough to integrate on the motherboard or chipset.
You're just blindly extrapolating trends. "Technology always gets smaller and cheaper", until it doesn't. Every trend holds... until it doesn't.

10 Gigabit Ethernet burns more power and has shorter range than 1 GbE and that will never change. There are physical limits to what you can do with copper wires.

You should really read the article I linked. It's not long, but it is 2 pages, which I missed at first.

Think of CAT cable, a good example.
Actually, it is. But, not for the reasons you're thinking. CAT6 costs more than CAT5e. CAT6a costs more than CAT6. This underscores my point, that pushing past certain limits starts to incur unavoidable cost increases.

Looking at costs of 100' of Ethernet, here's what I see on Newegg (filtered on Newegg as the seller, to eliminate outliers):
| Type | Base Price | Shipping | Total |
| --- | --- | --- | --- |
| CAT5 | $14.46 | $0.00 | $14.46 |
| CAT5E | $15.47 | $2.99 | $18.46 |
| CAT6 | $13.09 | $6.05 | $19.14 |
| CAT6E | $21.99 | $0.00 | $21.99 |
| CAT6A | $27.49 | $2.99 | $30.48 |
| CAT7 | $32.98 | $0.00 | $32.98 |

So, CAT6A costs more than double good ol' CAT5. Why? Because the amount and quality of the materials, and the construction processes, are higher. So, it will never magically cost the same as CAT5. There's only so far that economies of scale will get you.
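Computing the ratios from that table directly (prices as listed above):

```python
# Total prices (USD per 100 ft) from the Newegg snapshot above
totals = {
    "CAT5":  14.46,
    "CAT5E": 18.46,
    "CAT6":  19.14,
    "CAT6E": 21.99,
    "CAT6A": 30.48,
    "CAT7":  32.98,
}

base = totals["CAT5"]
for cable, price in totals.items():
    print(f"{cable}: {price / base:.2f}x the cost of CAT5")
# CAT6A comes out at ~2.11x: more than double, as noted above.
```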

I honestly think we might hit a point where (might already be there) where we need to look at integrating fiber into silicon to gain bandwidth:
Yes. I think we can agree on that. People have been forecasting it for decades, but we might be finally nearing the point where optical interconnects are needed between components inside the box.
 

You are taking a lot of what I say the wrong way. I never said it will cost the same; however, there are plenty of examples. CPUs: sure, it feels like we pay more, but we can now get an 8-core CPU for under $400. Five years ago, four cores was what sub-$400 got you.

We have hit walls before and eventually something came along and pushed us again. There will be an eventual breakthrough that will push us forward again.

I am not trying to predict the future and say what is absolutely going to happen, but rather using past history as an example of what is possible.