News Comet Lake-S CPUs Allegedly Command New LGA 1200 Socket and 400-Series Chipset

gggplaya

Distinguished
Jan 27, 2011
Coming so late to the party in 2020 and requiring a new, expensive high-end motherboard. It better hit 5.0 GHz and sustain higher multi-core clocks than the 3900X. Otherwise, I really don't see the point. I'd rather have the 2 extra cores of the 3900X.
 
Reactions: Metal Messiah.
Aug 6, 2019
Thanks Intel? Seems like they're just trying to get a piece of AMD's cake, but 14nm vs Zen 2's 7nm is a tough sell. Asking people to stay within the same architecture but buy a new motherboard is questionable, unless they make the price competitive. Now if they confirm that their 10nm process chips will work on the LGA 1200 motherboards, that might offer better value.

I will admit, with the higher TDP and PCIe 4.0 boards I'm definitely curious to see how well they overclock, just not curious enough to buy one with 10nm so close.
 
Reactions: Metal Messiah.

Soaptrail

Reputable
Jan 12, 2015
The author made a mistake: 1200 pins minus 1151 pins is not 9. 1200 pins is 49 more than 1151.

Therefore, Comet Lake-S processors will reportedly only fit into a motherboard with an LGA 1200 socket, nine more pins than the existing LGA 1151 socket.
I am not the king of grammar and spelling, but I have noticed a lot of little mistakes like this on Tom's in the last year. Too much to do and too few employees?
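For what it's worth, the subtraction is trivial to sanity-check; here is a throwaway Python sketch (the pin counts are just the socket names):

```python
# Pin-count difference between the rumored and existing sockets.
lga_1200 = 1200  # pins in the LGA 1200 socket
lga_1151 = 1151  # pins in the existing LGA 1151 socket

diff = lga_1200 - lga_1151
print(diff)  # 49, not the 9 the article states
```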
 

jimmysmitty

Polypheme
Moderator
Coming so late to the party in 2020 and requiring a new, expensive high-end motherboard. It better hit 5.0 GHz and sustain higher multi-core clocks than the 3900X. Otherwise, I really don't see the point. I'd rather have the 2 extra cores of the 3900X.
Intel already has higher multi-core clocks. I doubt 10 cores will be able to sustain 5 GHz on all cores at stock settings, though, or maybe they can if Intel has done some tweaking.

I have no doubt that it will clock higher and sustain higher boost clocks. The question is whether the power draw will be too much to justify buying this instead of waiting for, say, their 10nm or 7nm parts.

Thanks Intel? Seems like they're just trying to get a piece of AMD's cake, but 14nm vs Zen 2's 7nm is a tough sell. Asking people to stay within the same architecture but buy a new motherboard is questionable, unless they make the price competitive. Now if they confirm that their 10nm process chips will work on the LGA 1200 motherboards, that might offer better value.

I will admit, with the higher TDP and PCIe 4.0 boards I'm definitely curious to see how well they overclock, just not curious enough to buy one with 10nm so close.
By now we should all know that process naming schemes are pointless. While the 7nm AMD uses is better than Intel's 14nm, it's not by as much as people like to think.

The vast majority of people also don't think about these things. I have talked to a lot of people who would stick with Intel because AMD had not been a major competitor for years. Even in the HPC and server space there is hesitation. It will take time to change.

I still doubt we will get a 10nm desktop part considering how fast Intel is planning to push to 7nm. 10nm low-power laptop parts have just arrived, and if those launch in early 2020 with plans to push 7nm out in 2021, it might not make sense to release a desktop part based on 10nm.

Another NEW Socket, another new Motherboard chipset ? Solid pass... 🔫
Meh. Intel has never really been one to run the same board for more than two generations. So this shouldn't be a surprise to anyone, really.
 
Reactions: panathas
Meh. Intel has never really been one to run the same board for more than two generations. So this shouldn't be a surprise to anyone, really.
Yeah, I know that. This doesn't surprise me though. But come on, AMD has been giving backwards compatibility for the last few generations. Some of the new RYZEN chips support old motherboard chipsets, except the 3rd GEN RYZEN procs are not fully backwards compatible.

So why can't INTEL follow suit?

It looks to me like they are just forcing users to upgrade each time a new GEN CPU launches. And there are very few Gamers who might do a complete system overhaul or upgrade every 2 years or so, IMO.
 
Last edited:

voodoobunny

Distinguished
Apr 10, 2009
If it comes out in 2020, CML-S isn't really going to compete with Zen 2. At least not for long. Pretty soon it's going to be up against Zen 3. CML-S is going to have to be amazing to compete (and it really doesn't sound amazing).

On top of that, Zen 3 should still land on AM4. CML-S needs a new socket? AMD can beat Intel up about that all year! (I'm surprised they aren't making a much bigger deal about it already). If AMD are on the ball, they will include a requirement in their next socket that OEMs use BIOS chips with enough memory to guarantee 3 generations of upgrades without running out of space (the purported reason why they are having some issues with making some existing motherboards compatible), and figure out a way of guaranteeing that new chips will at least boot enough to do firmware upgrades.
 

jimmysmitty

Polypheme
Moderator
Yeah, I know that. This doesn't surprise me though. But come on, AMD has been giving backwards compatibility for the last few generations. Some of the new RYZEN chips support old motherboard chipsets, except the 3rd GEN RYZEN procs are not fully backwards compatible.

So why can't INTEL follow suit?

It looks to me like they are just forcing users to upgrade each time a new GEN CPU launches. And there are very few Gamers who might do a complete system overhaul or upgrade every 2 years or so, IMO.
My observation is that Intel builds each CPU and chipset around each other. This may entail changes to power layouts and pin counts depending on what they add or change in the CPU. I see no issue with that.

Currently, with the performance of CPUs, someone who has a 9900K or 3900X will most likely not need to upgrade for at least 3 years. I don't foresee an explosion in core utilization by games; software is typically very slow to adapt to faster hardware. It's why people with Sandy Bridge still do decently in the enthusiast space. In the mainstream consumer space this is even more so. The majority of people don't need anything beyond Sandy Bridge.

Now, AMD did it because they probably knew that first-gen Ryzen was not going to be as competitive with Intel's lineup. Allowing more CPUs to be added to the same platform lets people who would prefer AMD get there over time. However, the biggest caveat in doing so is that the CPU gets limited by the platform.

I have no issue with AMD doing this, but I feel it's more of a limitation on the CPU. If you look at when Intel adds more cores or more memory channels (HEDT), they tend to increase the pin count as well. Not always, but normally they do. If AMD decided to throw another memory channel into mainstream, it would require more pins. I imagine if they had created a new socket for Ryzen 3000, they might have been able to better route power for the higher core counts and possibly improve higher clock speed stability.

The longest-lasting socket for Intel was LGA775. Some older 900-series chipsets could support Penryn CPUs. Not that I would want to bottleneck one of the best LGA775 CPUs with such an old platform, but a few motherboard vendors did update higher-end 900-series chipsets to do so.
 
My observation is that Intel builds each CPU and chipset around each other. This may entail changes to power layouts and pin counts depending on what they add or change in the CPU. I see no issue with that.

Currently, with the performance of CPUs, someone who has a 9900K or 3900X will most likely not need to upgrade for at least 3 years. I don't foresee an explosion in core utilization by games; software is typically very slow to adapt to faster hardware. It's why people with Sandy Bridge still do decently in the enthusiast space. In the mainstream consumer space this is even more so. The majority of people don't need anything beyond Sandy Bridge.

Now, AMD did it because they probably knew that first-gen Ryzen was not going to be as competitive with Intel's lineup. Allowing more CPUs to be added to the same platform lets people who would prefer AMD get there over time. However, the biggest caveat in doing so is that the CPU gets limited by the platform.

I have no issue with AMD doing this, but I feel it's more of a limitation on the CPU. If you look at when Intel adds more cores or more memory channels (HEDT), they tend to increase the pin count as well. Not always, but normally they do. If AMD decided to throw another memory channel into mainstream, it would require more pins. I imagine if they had created a new socket for Ryzen 3000, they might have been able to better route power for the higher core counts and possibly improve higher clock speed stability.

The longest-lasting socket for Intel was LGA775. Some older 900-series chipsets could support Penryn CPUs. Not that I would want to bottleneck one of the best LGA775 CPUs with such an old platform, but a few motherboard vendors did update higher-end 900-series chipsets to do so.

Hmmm... All this indeed makes sense. I agree with some of your points. Thanks.
 

bit_user

Splendid
Herald
My observation is that Intel builds each CPU and chipset around each other. This may entail changes to power layouts and pin counts depending on what they add or change in the CPU. I see no issue with that.
Not really. What Intel does is introduce a new socket every second generation. They've done so since Sandy Bridge. Like clockwork. This news should surprise absolutely no one who has been following the PC industry for more than a few years.

Codename | Model number | Socket
--- | --- | ---
Sandy Bridge | 2000-series | LGA-1155
Ivy Bridge | 3000-series | LGA-1155
Haswell | 4000-series | LGA-1150
Broadwell | 5000-series | LGA-1150
Skylake | 6000-series | LGA-1151
Kaby Lake | 7000-series | LGA-1151
Coffee Lake | 8000-series | LGA-1151 (v2)
Coffee Lake-R | 9000-series | LGA-1151 (v2)
Comet Lake | 10000-series | LGA-1200

Notice a pattern? If the thousands digit of the model number is even = new socket! Like... zOMG!!!!1111
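The pattern is easy to verify mechanically. This small Python sketch just transcribes the table above and checks that the socket changes exactly when the thousands digit of the series number is even:

```python
# (series thousands digit, socket) pairs transcribed from the table above.
history = [
    (2, "LGA-1155"), (3, "LGA-1155"),
    (4, "LGA-1150"), (5, "LGA-1150"),
    (6, "LGA-1151"), (7, "LGA-1151"),
    (8, "LGA-1151 (v2)"), (9, "LGA-1151 (v2)"),
    (10, "LGA-1200"),
]

for (series, socket), (_, prev_socket) in zip(history[1:], history):
    socket_changed = socket != prev_socket
    # New socket if and only if the series' thousands digit is even.
    assert socket_changed == (series % 2 == 0), (series, socket)
print("pattern holds")
```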

And yes, there actually was a socketed Broadwell - the i7-5775C. It just failed to make a splash due to lackluster performance gains on their new 14 nm node and being closely followed by Skylake. It did offer a meaningful improvement in perf/W over Haswell.

The tragedy is that they seem to have blown yet another opportunity to introduce any meaningful platform changes, like some more direct-connected PCIe lanes, a better or fatter chipset connection (think DMI 4.0?), etc.

I could understand if it was too late for Coffee Lake to respond to Ryzen's x20-lane PCIe connectivity, but they certainly could've done it this time around. Hopefully, we at least get HDMI 2.1 support, or maybe even DisplayPort 2.0. At least, that would be something. By the next Intel desktop socket, we'll probably be heading into 2022 and AMD will already be moving past AM4.
 
Reactions: TJ Hooker
Jun 17, 2019
Well, if Intel were to change to a new socket, then I would prefer that they move to quad-channel memory for mainstream desktop. That is a much more valid excuse for changing sockets. Quad-channel memory means more memory bandwidth, which would benefit their next-generation integrated GPUs. And with one DIMM per channel, there would be no more downclocking of memory when all 4 memory DIMMs are used. Also, Intel's HEDT line is already rendered redundant by AMD's new Ryzen 3rd generation and Threadripper HCC (high core count) CPUs.
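To put rough numbers on the bandwidth argument: peak theoretical DDR4 bandwidth scales linearly with channel count (transfer rate times 8 bytes per 64-bit channel). A quick sketch, assuming DDR4-3200 purely for illustration:

```python
# Peak theoretical memory bandwidth: MT/s * 8 bytes per 64-bit channel.
def mem_bw_gb_s(mt_per_s: int, channels: int) -> float:
    return mt_per_s * 8 * channels / 1000  # GB/s

dual = mem_bw_gb_s(3200, 2)   # 51.2 GB/s
quad = mem_bw_gb_s(3200, 4)   # 102.4 GB/s, double the headroom for an iGPU
print(dual, quad)
```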
 
Reactions: bit_user

jimmysmitty

Polypheme
Moderator
Not really. What Intel does is introduce a new socket every second generation. They've done so since Sandy Bridge. Like clockwork. This news should surprise absolutely no one who has been following the PC industry for more than a few years.

Codename | Model number | Socket
--- | --- | ---
Sandy Bridge | 2000-series | LGA-1155
Ivy Bridge | 3000-series | LGA-1155
Haswell | 4000-series | LGA-1150
Broadwell | 5000-series | LGA-1150
Skylake | 6000-series | LGA-1151
Kaby Lake | 7000-series | LGA-1151
Coffee Lake | 8000-series | LGA-1151 (v2)
Coffee Lake-R | 9000-series | LGA-1151 (v2)
Comet Lake | 10000-series | LGA-1200

Notice a pattern? If the thousands digit of the model number is even = new socket! Like... zOMG!!!!1111

And yes, there actually was a socketed Broadwell - the i7-5775C. It just failed to make a splash due to lackluster performance gains on their new 14 nm node and being closely followed by Skylake. It did offer a meaningful improvement in perf/W over Haswell.

The tragedy is that they seem to have blown yet another opportunity to introduce any meaningful platform changes, like some more direct-connected PCIe lanes, a better or fatter chipset connection (think DMI 4.0?), etc.

I could understand if it was too late for Coffee Lake to respond to Ryzen's x20-lane PCIe connectivity, but they certainly could've done it this time around. Hopefully, we at least get HDMI 2.1 support, or maybe even DisplayPort 2.0. At least, that would be something. By the next Intel desktop socket, we'll probably be heading into 2022 and AMD will already be moving past AM4.
That was the tick-tock, for the most part. It was the same before Sandy Bridge in a lot of ways, minus 775. But it doesn't negate what I stated. Ivy Bridge was just a die-shrunk and slightly improved Sandy Bridge. Still, the CPU and chipsets are typically designed together, with the possibility of one future die shrink or improvement to them.

LGA 1151 has lasted, but the changes between V1 and V2 are mostly power layout.

Ryzen's x20 is just 4 more lanes for NVMe. While it may present some benefits, it's not something that would change things enough in the long run. If it were 4 more lanes the vendor could use freely, fine, but they, like the normal x16 lanes, are dedicated.
 

bit_user

Splendid
Herald
Well, if Intel were to change to a new socket, then I would prefer that they move to quad-channel memory for mainstream desktop. ... Quad-channel memory means more memory bandwidth, which would benefit their next-generation integrated GPUs. And with one DIMM per channel, there would be no more downclocking of memory when all 4 memory DIMMs are used. Also ... (high core count) CPUs.
Yes, bigger iGPU and/or more CPU cores would definitely benefit from quad-channel.

The main disadvantage would be cost, but that could be mitigated by having lower-end CPUs use only the first two channels and allowing lower-end motherboards to implement the same. You could argue this could cause customer confusion, but I'd like to think that most PC builders can deal with this extra detail. I guess spending more time in the non-News parts of this forum would rapidly dispel that belief.

Of course, it'd be far simpler if Intel would just try to drive their LGA 2066 platform to be more mainstream, and put their 8-core and 10-core (i.e. Comet Lake) on there. But, Intel is probably wary of creating any perceived segmentation that would keep those CPUs from being compared with AMD's AM4-based CPUs.
 

bit_user

Splendid
Herald
Ryzen's x20 is just 4 more lanes for NVMe. While it may present some benefits, it's not something that would change things enough in the long run. If it were 4 more lanes the vendor could use freely, fine, but they, like the normal x16 lanes, are dedicated.
I'm not too sure about that, but I don't care as NVMe is exactly what I want them for.
 

jimmysmitty

Polypheme
Moderator
I'm not too sure about that, but I don't care as NVMe is exactly what I want them for.
It is, though. It has some wiggle room, such as doing x2 NVMe and 2x SATA, but who is going to pick SATA and a slower NVMe drive? No one.

The main benefit is NVMe has a direct connection to the CPU.

Now, the X570 chipset has 16 additional PCIe 4.0 lanes, but their configuration is an additional x8 slot, with the other 8 split into various combos of SATA and x1/x2/x4 slots. Basically each Ryzen board, older and newer, supports a total of 32 PCIe lanes, 36 if you count the x4 link between the CPU and chipset, or maybe even 40 total since there might be an additional x4 for two NVMe drives.

Intel's mainstream is 16 lanes for the CPU with graphics and 4 for DMI. The Z390/Z370 chipset has 24 of its own PCIe lanes. So again 40 PCIe lanes, 44 if you count the DMI lanes. The biggest difference is the x4 PCIe NVMe link directly to the CPU, which would probably only benefit I/O-heavy workloads, which gaming and streaming are not.
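Tallying those counts (a rough sketch; exact carve-ups vary by board vendor, and the CPU-chipset uplinks are excluded on both sides):

```python
# Usable PCIe lanes per platform, excluding the CPU-chipset uplink.
amd = {"cpu_graphics_x16": 16, "cpu_nvme_x4": 4, "x570_chipset": 16}
intel = {"cpu_graphics_x16": 16, "z390_chipset": 24}

print("AMD total:", sum(amd.values()))      # 36
print("Intel total:", sum(intel.values()))  # 40
```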

I am a bit surprised Intel has not done the same just to match AMD, but then again the benefit has to be there. In most situations where I/O-heavy loads exist and the benefit is real, people tend to go for HEDT systems that have more PCIe lanes on the CPU (X299 platforms with 44 PCIe lanes on the CPU, for example). That's why most creator systems are HEDT workstations and not mainstream Ryzen or Core systems.
 

bit_user

Splendid
Herald
Now, the X570 chipset has 16 additional PCIe 4.0 lanes, but their configuration is an additional x8 slot, with the other 8 split into various combos of SATA and x1/x2/x4 slots. Basically each Ryzen board, older and newer, supports a total of 32 PCIe lanes, 36 if you count the x4 link between the CPU and chipset, or maybe even 40 total since there might be an additional x4 for two NVMe drives.

Intel's mainstream is 16 lanes for the CPU with graphics and 4 for DMI. The Z390/Z370 chipset has 24 of its own PCIe lanes. So again 40 PCIe lanes, 44 if you count the DMI lanes. The biggest difference is the x4 PCIe NVMe link directly to the CPU, which would probably only benefit I/O-heavy workloads, which gaming and streaming are not.
Okay, let's not count the chipset connection. So, that puts AMD at x36 (x16 + x4 are CPU-direct; x16 are from the chipset) and Intel at x40.

The big advantages for AMD are:
  1. You can move your fastest peripheral (besides GPU) off of the chipset and to a CPU-direct connection, leaving the CPU-chipset connection to be shared among the slower stuff.
  2. The CPU-chipset link is PCIe 4.0, so there's actually twice the bandwidth to feed the various chipset-connected peripherals.
  3. And, of course, the GPU and NVMe connections are PCIe 4.0. Not generally necessary, but (as with a lot of high-end PC stuff) still kinda nice to have.
Compared to that, Intel having 4 more lanes, in total, falls kinda flat.
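Point 2 is worth quantifying. Per-lane PCIe throughput is the transfer rate times the 128b/130b encoding efficiency, so doubling the rate exactly doubles the x4 chipset link (approximate figures, ignoring protocol overhead):

```python
# Approximate usable bandwidth of an x4 link with 128b/130b encoding.
def link_gb_s(gt_per_s: float, lanes: int) -> float:
    # GT/s -> Gb/s per lane after encoding, then GB/s across all lanes.
    return gt_per_s * (128 / 130) * lanes / 8

pcie3_x4 = link_gb_s(8.0, 4)    # ~3.94 GB/s (DMI 3.0-class uplink)
pcie4_x4 = link_gb_s(16.0, 4)   # ~7.88 GB/s (X570's uplink)
print(round(pcie3_x4, 2), round(pcie4_x4, 2))
```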

One thing I liked about my old Phenom II (890FX chipset, IIRC) was its x36 (PCIe 2.0) lanes*, each with an equal shot at the CPU. Now that the chipset lanes have a faster path to the CPU than on earlier Ryzens, it doesn't feel like a real sacrifice to upgrade.
* oops, I guess it was actually x44 lanes...​
 
Last edited:
Jun 17, 2019
Of course, it'd be far simpler if Intel would just try to drive their LGA 2066 platform to be more mainstream, and put their 8-core and 10-core (i.e. Comet Lake) on there. But, Intel is probably wary of creating any perceived segmentation that would keep those CPUs from being compared with AMD's AM4-based CPUs.
Well, Intel has done that before with Kaby Lake-X. Thus it's not that difficult to repurpose chips from the mainstream desktop segment for the HEDT segment.
 

jimmysmitty

Polypheme
Moderator
Okay, let's not count the chipset connection. So, that puts AMD at x36 (x16 + x4 are CPU-direct; x16 are from the chipset) and Intel at x40.

The big advantages for AMD are:
  1. You can move your fastest peripheral (besides GPU) off of the chipset and to a CPU-direct connection, leaving the CPU-chipset connection to be shared among the slower stuff.
  2. The CPU-chipset link is PCIe 4.0, so there's actually twice the bandwidth to feed the various chipset-connected peripherals.
  3. And, of course, the GPU and NVMe connections are PCIe 4.0. Not generally necessary, but (as with a lot of high-end PC stuff) still kinda nice to have.
Compared to that, Intel having 4 more lanes, in total, falls kinda flat.

One thing I liked about my old Phenom II (890FX chipset, IIRC) was its x36 (PCIe 2.0) lanes*, each with an equal shot at the CPU. Now that the chipset lanes have a faster path to the CPU than on earlier Ryzens, it doesn't feel like a real sacrifice to upgrade.
* oops, I guess it was actually x44 lanes...​
I am not arguing against 4.0. It is still not needed bandwidth-wise, but it would be nice to have, although with PCIe 5.0 coming sooner than expected it might become a moot point.

The only benefit to Intel having 4 more total PCIe lanes is more potential devices like SATA or USB, maybe even 10GbE, but otherwise, as you said: meh.

And as I said, the NVMe advantage would only count for people with I/O-heavy workloads that would saturate the chipset-to-CPU connection. The majority of people buying a mainstream platform will not do so. Most who have I/O-heavy scenarios would instead go for HEDT, as there are many more advantages to it over mainstream for such workloads, especially more memory bandwidth.
 

bit_user

Splendid
Herald
Well, Intel has done that before with Kaby Lake-X. Thus it's not that difficult to repurpose chips from the mainstream desktop segment for the HEDT segment.
Well, Kaby Lake-X would be an example of a lower-end chip that only uses a subset of the socket's memory channels and PCIe lanes. It wasn't well received, but that's partly because people could buy the same chip for a desktop socket.

Anyway, let's set aside the idea of a big socket that's partially implemented (although another example of that would be ThreadRipper, which has the same mechanical socket as EPYC). I would really rather see Intel just make LGA 2066 more accessible.
 

jimmysmitty

Polypheme
Moderator
Well, Kaby Lake-X would be an example of a lower-end chip that only uses a subset of the socket's memory channels and PCIe lanes. It wasn't well received, but that's partly because people could buy the same chip for a desktop socket.

Anyway, let's set aside the idea of a big socket that's partially implemented (although another example of that would be ThreadRipper, which has the same mechanical socket as EPYC). I would really rather see Intel just make LGA 2066 more accessible.
I actually think Intel should merge HEDT and Mainstream and just offer chips from low to high end that can use more memory channels. It would be like the CPUs they had that supported DDR2 and DDR3 during transitional phases. It would also make it more accessible overall.
 

bit_user

Splendid
Herald
I am not arguing against 4.0. It is still not needed bandwidth-wise, but it would be nice to have
IMO, the main benefit is to have it for the x4 chipset connection. Especially if you've got some bandwidth-hungry peripherals hanging off the chipset, like one or more NVMe SSDs, 10G Ethernet, etc.

although with PCIe 5.0 coming sooner than expected it might become a moot point.
No. I don't expect to see PCIe 5.0 on the desktop, anytime soon. It will burn more power and pose more mobo routing & fabrication challenges, likely increasing mobo layer count and therefore price. I'm also not sure what kind of distances it can support. Combine those reasons with a general lack of need for anything faster than 4.0, and it's safe to say it'll be a while before we see it in the mainstream, if ever.
 

jimmysmitty

Polypheme
Moderator
IMO, the main benefit is to have it for the x4 chipset connection. Especially if you've got some bandwidth-hungry peripherals hanging off the chipset, like one or more NVMe SSDs, 10G Ethernet, etc.


No. I don't expect to see PCIe 5.0 on the desktop, anytime soon. It will burn more power and pose more mobo routing & fabrication challenges, likely increasing mobo layer count and therefore price. I'm also not sure what kind of distances it can support. Combine those reasons with a general lack of need for anything faster than 4.0, and it's safe to say it'll be a while before we see it in the mainstream, if ever.
PCIe 5.0 has been spec'd out already. Normally, if it's spec'd, it means they know they can get it to work.

And technically, mainstream doesn't need anything more than PCIe 3.0. As I said before, anyone who enters I/O-heavy markets tends to gravitate towards HEDT systems, which have more direct CPU connections than mainstream anyway.
 

bit_user

Splendid
Herald
PCIe 5.0 has been spec'd out already. Normally, if it's spec'd, it means they know they can get it to work.
Oh, I believe it works and will ship... in servers. We don't yet have any info on when/if it will hit desktops.

I am not arguing against 4.0. It is still not needed bandwidth-wise, but it would be nice to have, although with PCIe 5.0 coming sooner than expected it might become a moot point.
So, you're arguing that people should hold off on PCIe 4.0, since you expect 5.0 won't be far off?

I think this is irresponsible advice, since PCIe 5.0 is much more technically demanding (i.e. it doubles frequencies to near theoretical limits) and we have not seen any consumer product roadmaps that include it. You don't actually know if/when it's coming.

Your position falls dangerously close to FUD.

Besides, DDR5 is a lot closer than PCIe 5.0, but you don't tell people not to buy DDR4-based systems, eh?

And technically mainstream doesn't need anything more than PCIe 3.0.
IMO, this is the reason people might want to hold off on PCIe 4.0. They don't need it. On the other hand, they probably don't need an 8-core CPU or NVMe storage, either.
 
