News Intel 10-Core Comet Lake-S CPU Could Suck Up To 300W


bit_user

Polypheme
Ambassador
Thanks. Unfortunately, that's SFP+ and only 4 ports. I got a pair of 10 Gig SFP+ Ethernet cards on eBay several years ago, but I only use them via direct link. My other two 10 Gig cards are RJ-45, as are the motherboards with 10 Gig integrated and most high-end NAS boxes with the feature. I was looking to standardize on RJ-45.

I know I could get RJ-45 SFP+ adapters, but that increases the cost, introduces the risk of compatibility issues, and, anyway, I'd really like more than 4 ports.

I think 2.5 or 5 Gbps will start to go "mainstream" when sub-$100 switches start to include at least 2 such ports. I have a sense that's about when gigabit took off.
 
  • Like
Reactions: TJ Hooker

bit_user

Polypheme
Ambassador
In every thread you insist on it. Fine, show us proof, show us statements from AMD or Intel, and not your guessing.
I've posted plenty of links explaining the additional costs and limitations of PCIe 5. My case is that it doesn't make sense on a desktop platform, from either an economic or a practical standpoint. However, I can't "prove" that someone won't do something crazy and nonsensical.

Moreover, manufacturers tend to be careful about what they say. They are unlikely to say they categorically won't do something, even if they consider it highly unlikely.

If that's not good enough for you, then I guess that's your problem. You are free to believe what you like, whether or not it's backed by sound reason, physics, economics, supply & demand, etc.
 
  • Like
Reactions: TJ Hooker

bit_user

Polypheme
Ambassador
That's because you are focusing only on 16 lanes of PCIe 3.0 ...
You keep focusing on just one side of this. You could use the same logic to make the case that we should have PCIe 7.0, so that graphics cards could run on a single lane. No one is disagreeing that there would be benefits from faster speeds.

The problem is that you're ignoring the technical issues. Also, the fact that there's not an easy way to ratchet down the interface width of graphics cards and motherboards is just the icing on the cake.

You can put a 10 Gbit card on only 1 lane of PCIe 5.0. Today you need 4 or 8 lanes for that on 3.0.
No, only x4 lanes @ PCIe 2.0 or x2 @ PCIe 3.0.

Unfortunately, last I checked, the market is not clamoring for 10 Gigabit. And most people will not jump on that bandwagon until it's integrated into their motherboard.

Even so, modern motherboards have plenty of chipset lanes that you could use. And with PCIe 4.0, X570 boards' chipsets have an 8 GB/sec (per-direction) link to the CPU. Since 10 Gbps is roughly 1.25 GB/sec, there's really no reason why you need more CPU-direct lanes for 10 Gigabit Ethernet.
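The lane math above is easy to sanity-check with the standard per-lane PCIe rates. A rough sketch (the per-lane figures below are the usual post-line-coding numbers; real-world throughput is a few percent lower after protocol overhead):

```python
# Approximate usable bandwidth per lane, in GB/s, for each PCIe generation
# (after 8b/10b line coding for 2.0, 128b/130b for 3.0 and later).
PCIE_GBPS_PER_LANE = {2.0: 0.5, 3.0: 0.985, 4.0: 1.969, 5.0: 3.938}

TEN_GBE_GBPS = 10 / 8  # 10 Gbps Ethernet = 1.25 GB/s per direction

def lanes_needed(gen):
    """Smallest standard link width (x1/x2/x4/x8/x16) that covers 10 GbE."""
    for width in (1, 2, 4, 8, 16):
        if width * PCIE_GBPS_PER_LANE[gen] >= TEN_GBE_GBPS:
            return width

for gen in (2.0, 3.0, 4.0, 5.0):
    print(f"PCIe {gen}: x{lanes_needed(gen)}")
# PCIe 2.0 needs x4, PCIe 3.0 needs x2, and PCIe 4.0/5.0 need only x1.
# The X570 chipset's PCIe 4.0 x4 uplink (~7.9 GB/s) dwarfs all of these.
```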
 
Last edited:
  • Like
Reactions: TJ Hooker
Nope. Not necessarily. Base clocks are the minimum guaranteed clocks for multi-core loads within spec. There's nothing saying that a light multi-core load can't run above base clocks.
That's exactly what I said.
The load that produces 250W on the 9900K is the exact opposite of a light multi-core load, though, and running that same load on a stock Ryzen will keep it at base clocks.
 
Unfortunately, last I checked, the market is not clamoring for 10 Gigabit. And most people will not jump on that bandwagon until it's integrated into their motherboard.
I think one reason is that most people probably wouldn't see much benefit from faster home networking currently, since 1 Gigabit ethernet is close enough to the speed of a typical bulk-storage drive as it is, and enough to handle pretty much any home internet connection. With large SSDs coming down in price I could see more people considering them for network file storage soon enough though.
 

TJ Hooker

Titan
Ambassador
When all cores are under load they too have to run at base clocks; it's 3.5 GHz for the 16-core one.
They only hit mid-4 GHz if only a single core is being stressed.
Physics works for everybody.
It looks like the 3950X can maintain ~3.9 GHz under an all-core load at stock settings, although that will likely vary from one chip to another.
 
Last edited:
  • Like
Reactions: bit_user

bit_user

Polypheme
Ambassador
I think one reason is that most people probably wouldn't see much benefit from faster home networking currently,
Yes, most people. Gigabit is still more than enough for internet and even in-home video streaming.

1 Gigabit ethernet is close enough to the speed of a typical bulk-storage drive as it is,
No, that's not even true for hard drives. The media transfer rate for modern hard drives (i.e. single disk) is already almost double that. In a NAS with 4-disk RAID 5, you could triple that, yet again.

With large SSDs coming down in price I could see more people considering them for network file storage soon enough though.
Could be, but perhaps the main use case will be people wanting to do quicker backups of their laptops over Wi-Fi 6.
 
Jan 8, 2020
Motherboard manufacturers upset as Intel can't get power-draw stable on its latest 10 core 14nm flagship CPU

Thing is, Intel doesn't support fast memory under DDR4, so there's hardly any rush to support DDR5 when it currently isn't showing much in the way of real-life benefits. The same thing happened with the move between DDR2 and 4; it will take a while to see the benefit.
 
No, that's not even true for hard drives. The media transfer rate for modern hard drives (i.e. single disk) is already almost double that. In a NAS with 4-disk RAID 5, you could triple that, yet again.
Well, that's why I said "close enough". : P

It's at least in the general ballpark of hard drive speeds. The performance shouldn't be that much of a concern for most common uses like streaming video, or downloading video and saving it directly to a networked drive, nor should it be an issue for small to moderate file transfers that won't take long anyway. The only time one would be likely to really notice the limited performance would be for large, multi-gigabyte transfers to or from a local drive. So, maybe for things like backups, though I imagine most automatic backup systems will be working unattended anyway. There are certainly some use cases that could benefit from more performance, like those regularly transferring large files over their network for something like video editing, but "most people" probably wouldn't have a lot to gain from it at this time, preventing it from really becoming mainstream. I'm sure we'll see higher-speed networking hardware becoming more common in the coming years though, even if it's mostly just 2.5 and 5 Gigabit for a while.

Could be, but perhaps the main use case will be people wanting to do quicker backups of their laptops over Wifi 6.
The typical real-world performance of a Wi-Fi 6 router connecting to a portable device probably isn't going to significantly exceed 1 gigabit, and for most existing devices won't even hit that. It may help with maintaining performance in multi-user scenarios, but overall I think the performance gains of Wi-Fi 6 won't be that large over 802.11ac.
 
Well, that's why I said "close enough". : P

It's at least in the general ballpark of hard drive speeds. The performance shouldn't be that much of a concern for most common uses like streaming video, or downloading video and saving it directly to a networked drive, nor should it be an issue for small to moderate file transfers that won't take long anyway. The only time one would be likely to really notice the limited performance would be for large, multi-gigabyte transfers to or from a local drive. So, maybe for things like backups, though I imagine most automatic backup systems will be working unattended anyway. There are certainly some use cases that could benefit from more performance, like those regularly transferring large files over their network for something like video editing, but "most people" probably wouldn't have a lot to gain from it at this time, preventing it from really becoming mainstream. I'm sure we'll see higher-speed networking hardware becoming more common in the coming years though, even if it's mostly just 2.5 and 5 Gigabit for a while.
Even transferring a 40GB Blu-ray only takes about 5 minutes with 1GbE. That is about the max file size that a home user will ever transfer. Unless you are consistently transferring 250GB+ there isn't much of a call for faster than 1GbE for the home user.
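The transfer-time arithmetic here checks out. An idealized sketch, ignoring protocol overhead and assuming the drives on both ends can keep up:

```python
def transfer_minutes(size_gb, link_gbps):
    """Idealized time in minutes to move a file over a given link."""
    size_gbits = size_gb * 8          # decimal GB to gigabits
    return size_gbits / link_gbps / 60

print(f"{transfer_minutes(40, 1):.1f} min")   # 40 GB over 1 GbE  -> ~5.3 min
print(f"{transfer_minutes(40, 10):.1f} min")  # same file on 10 GbE -> ~0.5 min
```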
 

bit_user

Polypheme
Ambassador
Well, that's why I said "close enough". : P

It's at least in the general ballpark of hard drive speeds.
What?? No. A factor of 2 is significant. And the factor of 5 or 6 that you can get with a 4-disk NAS is massive.

The only time one would be likely to really notice the limited performance would be for large, multi-gigabyte transfers to or from a local drive.
I'm a corner case, but I have my homedir on my fileserver, where I keep the source code of projects I'm working on. When I do software builds, it results in a flurry of transactions that are heavily affected by the connection bandwidth.

I'm sure we'll see higher-speed networking hardware becoming more common in the coming years though, even if its mostly just 2.5 and 5 Gigabit for a while.
5 Gbps is basically enough for the 4-disk NAS scenario I mentioned, assuming they're mechanical disks. But, if 3D QLC NAND prices continue to drop, you could have more people going for SSD-based NAS setups, where we could already talk about bonded 10 Gbps links.
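The NAS arithmetic in this exchange can be sketched roughly like this (the ~200 MB/s sustained per-disk figure is an assumption for a modern 7200 RPM HDD; RAID overhead and protocol overhead shave a bit more off in practice):

```python
DISK_MBPS = 200  # assumed sustained sequential rate of one 7200 RPM HDD

def raid5_seq_read_mbps(disks, per_disk=DISK_MBPS):
    """Rough sequential throughput of a RAID 5 array: one disk's worth of
    each stripe is parity, so N disks deliver roughly (N-1)x one disk."""
    return (disks - 1) * per_disk

def link_mbps(gbps):
    """Ethernet line rate converted to MB/s (protocol overhead ignored)."""
    return gbps * 1000 / 8

nas = raid5_seq_read_mbps(4)  # ~600 MB/s from a 4-disk mechanical array
print(nas, link_mbps(1), link_mbps(5), link_mbps(10))
# 1 GbE (125 MB/s) bottlenecks the array badly; 5 GbE (625 MB/s) just
# about covers it; 10 GbE (1250 MB/s) leaves headroom for SSD arrays.
```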
 

bit_user

Polypheme
Ambassador
Even transferring a 40GB Blu-ray only takes about 5 minutes with 1GbE.
That's a long time, if you're sitting there and waiting for it.

That is about the max file size that a home user will ever transfer.
Well, there's the example I gave of a full backup. 4K+ video files, that people shoot on their phones, can also really add up.

Unless you are consistently transferring 250GB+ there isn't much of a call for faster than 1GbE for the home user.
That's utter rubbish.

It really has to do with your workflow. If you frequently have to sit and wait for data to copy, no matter how big, then you'd be wise to go with a faster network connection, since that's the bottleneck.
 
What?? No. A factor of 2 is significant. And the factor of 5 or 6, that you can get with a 4-disk NAS, is massive.
Again, I'm referring to the needs of "most users" setting up a home network, who for most common home networking tasks probably won't notice a huge difference. It's not like the performance difference when moving from a 100 megabit network to a gigabit network, for example, where there should be a very noticeable difference in transfer performance with just about any drive. Something like a RAID 5 NAS can probably be considered a pretty niche setup as well, as far as mainstream users are concerned, and mainstream users will be a big part of what dictates pricing of hardware. Most are likely just interested in getting the full performance of their Internet connection, and don't care about much else. Compiling software from files stored on a network falls more along the lines of a professional usage scenario, so it's not surprising to see prices targeting businesses.
 

spongiemaster

Admirable
Dec 12, 2019
Again, I'm referring to the needs of "most users" setting up a home network, who for most common home networking tasks probably won't notice a huge difference. It's not like the performance difference when moving from a 100 megabit network to a gigabit network, for example, where there should be a very noticeable difference in transfer performance with just about any drive. Something like a RAID 5 NAS can probably be considered a pretty niche setup as well, as far as mainstream users are concerned, and mainstream users will be a big part of what dictates pricing of hardware. Most are likely just interested in getting the full performance of their Internet connection, and don't care about much else. Compiling software from files stored on a network falls more along the lines of a professional usage scenario, so it's not surprising to see prices targeting businesses.
This is the internet, where the extreme niche is treated like the norm. You're right. The people around me just want network performance that won't bottleneck their internet connection. They're not running ethernet cables throughout their house to connect their NAS RAID boxes and professional gamer's league room. Typically, there is one computer sitting next to the internet modem/wifi router, and everything else is connected through wifi.
 
That's a long time, if you're sitting there and waiting for it.
Apparently you have never heard of doing something else while that is going on....

Well, there's the example I gave of a full backup. 4K+ video files, that people shoot on their phones, can also really add up.
Most 4K videos that people shoot on their phones aren't going to be 10+ minutes in length; they will be 1-2 minutes, so the file size will be 10GB at most.

That's utter rubbish.

It really has to do with your workflow. If you frequently have to sit and wait for data to copy, no matter how big, then you'd be wise to go with a faster network connection, since that's the bottleneck.
If you are doing things like that, you ARE NOT the normal home user. Not to mention that unless you have a RAID 5, 10, or 50 setup, or you are using datacenter-grade helium drives, your network will barely be the bottleneck. The WD Black 6TB has an "internal transfer" speed of 227 MB/sec; however, that is from the buffer to the drive. Once you get past the buffer, the actual sequential performance will be closer to 150 MB/sec +/- 25 MB/sec. Again, this is all dealing with sequential data, and once it isn't sequential, performance will be far lower. Now, if you are using an Ultrastar DC HC530 (14TB helium drive), which has a sustained sequential transfer speed of 267 MB/sec, you could see a difference in performance, assuming it will only be sequential data.

Don't get me wrong I would love to have a 25GbE setup at my home with a large RAID 50 NAS. But that is due to me being a VMware Admin who has a software defined storage setup at work running off of quad 25GbE ports. But due to this I would NOT be a typical home user and even with the file transfers that I do, the benefit of going to even 5GbE would be minimal.
 

bit_user

Polypheme
Ambassador
Again, I'm referring to the needs of "most users" setting up a home network, who for most common home networking tasks probably won't notice a huge difference.
"Most users" would notice a factor of 2, in transfer time, when copying a big file. And the ~6x you could get with a 4-bay NAS is very much the kind of order-of-magnitude difference you're talking about.

I think you just don't want to admit that your notion of hard disk speeds was outdated.

It's not like the performance difference when moving from a 100 megabit network to a gigabit network,
Basically nothing in tech is improving by 10x per generation, these days. If that's your standard for a worthwhile upgrade, you're either on a different planet, or at least in a different decade.

Something like a RAID 5 NAS can probably be considered a pretty niche setup as well,
No, that's like the mainstream of NAS.

Most are likely just interested in getting the full performance of their Internet connection, and don't care about much else.
Yeah, and it's already been established that we're not talking about them. Nobody disagrees that gigabit is fine for web usage, or even online gaming and streaming.

Anyway, if you don't consider 2x a worthwhile upgrade, I'll just keep that in mind for the next discussion about CPUs, GPUs, RAM, SSDs, etc.
 

bit_user

Polypheme
Ambassador
Apparently you have never heard of doing something else while that is going on....
As I said, it depends on your workflow. If you're just copying files in the background, maybe you don't care. However, if you need to do something else right after the data transfer completes, and it's not so big that it takes a really long time, then maybe you can't afford to do anything but wait.

If you are doing things like that you ARE NOT the normal home user.
Obviously not. But, I'm also a somewhat early adopter of the tech, which normal home users are not.

Anyway, this whole talk of people who just use wifi and watch Netflix on their laptop is completely missing the point. This is not a forum aimed at those people. This is a forum aimed at PC enthusiasts, so our baseline user is someone with a desktop that they either built or at least upgraded.

It's cute of you guys to try and shift the baseline in your favor. I never said I wanted the grandma next door to have 10 Gig Ethernet. I'm talking about the upper end of the mainstream desktop & mainstream gaming laptop markets. Basically, the typical enthusiast. And what I want to see is for at least 2.5 Gbps to become the norm, on those platforms.

Not to mention that unless you have a RAID5, 10, 50 setup or you are using Datacenter Grade Helium drives, your network will barely be the bottleneck. The WD Black 6TB has an "internal transfer" speed of 227MB/sec, however, that is from the buffer to the drive. Once you get past the buffer the actual sequential performance will be closer to 150MB/sec +/- 25MB/sec.
I was just looking at a datasheet of 7200 RPM WD Gold HDDs, which pretty much all feature a max sustained transfer rate of 255 MB/sec. So, we're not talking about the DRAM buffer, but actually writing to the outer cylinders of the magnetic media. Even though it drops towards the center, the average sustained speed is still going to be above 200 MB/sec.

Again this is all dealing with sequential data and once it isn't sequential then performance will be far lower.
We're talking about sequential, because we're talking about big files. If you're running a good filesystem, most of your large file I/O is going to be sequential. With modern filesystems, you have to continually write at near-capacity, to encounter appreciable fragmentation.

Don't get me wrong I would love to have a 25GbE setup at my home with a large RAID 50 NAS.
Nah, I'm good with 10 Gig. I actually have two RAIDs - one small RAID-5 of SSDs and a slightly larger RAID-6 of HDDs. The latter doesn't stay up - I use it mostly for backups and other archival purposes. I wouldn't mind running it on 5 Gbps.
 
"Most users" would notice a factor of 2, in transfer time, when copying a big file. And the ~6x you could get with a 4-bay NAS is very much the kind of order-of-magnitude difference you're talking about.

I think you just don't want to admit that your notion of hard disk speeds was outdated.
I think you might be kind of missing the intended point of my posts. I'm not saying that no one would have a use for such hardware in a home network, just that the mainstream market doesn't seem to have a pressing need for it for the time being, and that the needs of the mainstream market will have a big impact on pricing and availability. Hence, why multi-gigabit ethernet is not standard on motherboards yet, and why there are no cheap 10 gigabit switches with lots of ports.

Again, only a relatively small minority of people are regularly transferring huge files across their home network, and even most of those are not likely having their workflow impeded in any significant way by network performance, so in most cases the transfer speed probably isn't going to be a huge concern. 2x the performance might be fine if it's something you are actively waiting for, but if it's a transfer happening in the background, you're probably not going to care that much. So networking hardware that fits such a usage scenario tends to be primarily aimed at businesses, making it more expensive. If manufacturers thought it would be profitable to make multi-gigabit ethernet standard for consumer-oriented hardware, and that it wouldn't cut into their profit margins for professional hardware, they likely would do so.
 
  • Like
Reactions: TJ Hooker
I was just looking at a datasheet of 7200 RPM WD Gold HDDs, which pretty much all feature a max sustained transfer rate of 255 MB/sec. So, we're not talking about the DRAM buffer, but actually writing to the outer cylinders of the magnetic media. Even though it drops towards the center, the average sustained speed is still going to be above 200 MB/sec.
The WD Gold is a helium-filled datacenter drive based off of the HGST Ultrastar HDD. I went with the WD Black because it is their highest-performance consumer drive. A lot of people who don't know much about HDDs will just put WD Blues or Blacks into their NAS rather than Reds or Purples. With the Red, unless you get the Red Pro, you have a 5400 RPM drive, and of the Purples, only the 8+TB models are 7200 RPM. Also, the specifications for the Black specifically state that the transfer rate is from the buffer. https://documents.westerndigital.co...sheet-wd-black-pc-hard-drives-2879-771434.pdf
 
I think you might be kind of missing the intended point of my posts. I'm not saying that no one would have a use for such hardware in a home network, just that the mainstream market doesn't seem to have a pressing need for it for the time being, and that the needs of the mainstream market will have a big impact on pricing and availability. Hence, why multi-gigabit ethernet is not standard on motherboards yet, and why there are no cheap 10 gigabit switches with lots of ports.

Again, only a relatively small minority of people are regularly transferring huge files across their home network, and even most of those are not likely having their workflow impeded in any significant way by network performance, so in most cases the transfer speed probably isn't going to be a huge concern. 2x the performance might be fine if it's something you are actively waiting for, but if it's a transfer happening in the background, you're probably not going to care that much. So networking hardware that fits such a usage scenario tends to be primarily aimed at businesses, making it more expensive. If manufacturers thought it would be profitable to make multi-gigabit ethernet standard for consumer-oriented hardware, and that it wouldn't cut into their profit margins for professional hardware, they likely would do so.
Don't forget that a lot of the people at home who could use that storage performance only need it on 1 computer. Your QNAP, Synology, etc. consumer-level 4+ disk NAS boxes allow them to be connected over USB 3.0, so that right there gets you a 5Gb/s connection.
 

TJ Hooker

Titan
Ambassador
Don't get me started on wifi and NAS.

My NAS is insanely slow for backups due to the slow link between my PC and the NAS plugged into the modem.

Any benefit to wifi is welcome.
If you're talking about a router with storage attached via USB, rather than a dedicated NAS system, then your router may very well be the bottleneck rather than the network interface. I believe it's a result of the routers' CPUs being too weak to sustain good throughput to the attached storage. Whatever the reason, I remember looking up benchmarks, and read/write speeds of attached storage can vary hugely from one router to the next, with everything else in the setup being constant and using a wired ethernet connection.
 
  • Like
Reactions: bit_user

spongiemaster

Admirable
Dec 12, 2019
I was just looking into this, and I'm pretty sure that not all of the Gold drives are helium-filled (in the past, only the higher-capacity drives were) and that not all of them are yet made by HGST (though some are, thankfully).

They're not; good luck trying to figure it out. In the original Gold series, any capacity 6TB and under is air-filled; any capacity above 6TB is helium. Then WD killed the series and used the Ultrastar DC branding. With those, there are both air-filled and helium capacities from 6TB to 10TB, only air below and only helium above. WD brought back the Gold series late last year. It's safe to assume that, as before, low capacities are air (no one is making a 1TB helium drive) and anything 12TB and larger is helium, but who knows in between.
 
  • Like
Reactions: bit_user