Intel Core i9-10980XE Review: Intel Loses its Grip on HEDT

I'm surprised nobody caught this from the second paragraph of the article.

Intel's price cuts come as a byproduct of AMD's third-gen Ryzen and Threadripper processors, with the former bringing HEDT-class levels of performance to mainstream 400- and 500-series motherboards, while the latter lineup is so powerful that Intel, for once, doesn't even have a response.

"For once"? Try twice. This recalls the olden days of the first-gen Slot-A Athlon processors. Now, I'm not well-versed in Tom's Hardware articles circa 1999, but this was not hard to find at all:

Coppermine's architecture is still based on the architecture of Pentium Pro. This architecture won't be good enough to catch up with Athlon. It will be very hard for Intel to get Coppermine to clock frequencies of 700 and above and the P6-architecture may not benefit too much from even higher core clocks anymore. Athlon however is already faster than a Pentium III at the same clock speed, which will hardly change with Coppermine, and Athlon is designed to go way higher than 600 MHz. This design screams for higher clock speeds! AMD is probably for the first time in the very situation that Intel used to enjoy for such a long time. AMD might already be able to supply Athlons at even higher clock rates right now (650 MHz is currently the fastest Athlon), but there is no reason to do so.

https://www.tomshardware.com/reviews/athlon-processor,121-16.html

Intel didn't have a response back then either.

It's the same story as the Athlon 64 days:


Intel falls behind, everyone proclaims doom and gloom for Intel, then Intel comes back and the cycle repeats.
 
Ryzen 9 3950X | 16 / 32 (cores/threads) | 3.5 / 4.7 GHz | 64MB L3 | 64 PCIe lanes | Dual DDR4-3200 | 105W

Is that a typo? 64 PCIe lanes on the 3950X?



So, what are you saying? That a 280W CPU under full load will be cooler and use less power than a 165W CPU, given that the system power draw for all other components won't change... well, except for the motherboard, maybe?

I'd really like to see some charts or formulas for calculating this stuff at 25%, 50%, 75%, and 100% load, etc.

Even if that were true, AMD clearly doesn't want to break into the Australian market, because the cost of the changeover to AMD is twice that of Intel as prices stand at this minute. I'd almost be tempted to go AMD if I could be sure the power consumption wasn't going to set me back another $600 per year, and that, with ambient temps of 35-40°C, my H150i Pro could still cool it over summer.
That is what the data says. Don't forget that Intel and AMD calculate their TDPs differently. Also, if a task parallelizes easily, the extra cores will get it done faster, so while the chip might draw more power while it's working, the job finishes far sooner and you end up using less energy overall.
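To put that in rough numbers, here is a minimal race-to-idle sketch; the wattages and run times are made-up illustrations, not measurements of any specific CPU:

```python
# Back-of-the-envelope energy comparison for a parallel job.
# All numbers are made-up illustrations, not measurements.

def job_energy_wh(package_watts: float, hours: float) -> float:
    """Energy in watt-hours = average package power * time."""
    return package_watts * hours

# Hypothetical: a 280 W chip with more cores finishes a render in 1.0 h,
# while a 165 W chip needs 2.0 h for the same parallel workload.
fast_many_cores = job_energy_wh(280, 1.0)   # 280 Wh
slow_fewer_cores = job_energy_wh(165, 2.0)  # 330 Wh

print(f"280 W part: {fast_many_cores:.0f} Wh")
print(f"165 W part: {slow_fewer_cores:.0f} Wh")
# Despite the higher instantaneous draw, the faster chip can use
# less total energy because it returns to idle sooner.
```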
 

mac_angel

Distinguished
Depends on the workload you are talking about. I would say that in more cases than not the Threadripper still beats the i9 for popular workstation tasks.
I was talking dollar-to-performance, or per-core performance. There is a very big price difference. Yes, AMD has taken over the HEDT high-end tier, but neither company has anything in the same price range as the other. I'd be a little curious about a couple of benchmarks with a core-for-core comparison; most motherboards let you turn off cores. Core for core, Intel will probably win. Again, not an Intel fanboy; as I said, I go with what I can afford and get the best performance out of, as well as being able to use it for a while. I can't afford to upgrade for every new chip. I'm still running a 6850K, but I am hoping to upgrade to a Core i9-10940X, which is just over $1,100. I already have the motherboard, given to me for free; it just needs a BIOS update. And I am hoping it will last me a few years, too, though I'm not so confident in that since it's still running PCIe 3.0.
 
I was talking dollar-to-performance, or per-core performance. There is a very big price difference. Yes, AMD has taken over the HEDT high-end tier, but neither company has anything in the same price range as the other. I'd be a little curious about a couple of benchmarks with a core-for-core comparison; most motherboards let you turn off cores. Core for core, Intel will probably win. Again, not an Intel fanboy; as I said, I go with what I can afford and get the best performance out of, as well as being able to use it for a while. I can't afford to upgrade for every new chip. I'm still running a 6850K, but I am hoping to upgrade to a Core i9-10940X, which is just over $1,100. I already have the motherboard, given to me for free; it just needs a BIOS update. And I am hoping it will last me a few years, too, though I'm not so confident in that since it's still running PCIe 3.0.

With a free board, that makes it advantageous to go with it. Same if you got a new TR board.

As for PCIe 3.0, honestly it will still be some years before we truly see it saturated by a single card; the only advantage PCIe 4.0 serves right now is in storage, which can take advantage of the increased speed. However, while faster transfer rates are nice, one thing I have noticed is that IOPS are still similar, and IOPS make for a smoother experience than raw transfer rates do. It's why Intel's top Optane PCIe SSD gets such good reviews.

As well, we have Optane DIMMs to look forward to; they should slowly trickle down and eventually reach a better $/GB ratio, which would make PCIe SSDs pretty pointless. Of course, that's probably well down the line and won't be widely adopted until there is also an alternative that AMD can utilize.
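As a minimal sketch of why the IOPS number tends to dominate how a drive "feels" (the figures are illustrative, not benchmarks of any particular SSD):

```python
# Effective throughput for small random I/O = IOPS * block size.
# Numbers are illustrative, not benchmarks of any specific drive.

def random_throughput_gb_s(iops: int, block_bytes: int) -> float:
    """Throughput delivered when every request is a small random block."""
    return iops * block_bytes / 1e9

# Hypothetical PCIe 4.0 drive: 5 GB/s sequential, 500k IOPS at 4 KiB.
seq_gb_s = 5.0
rand_gb_s = random_throughput_gb_s(iops=500_000, block_bytes=4096)

print(f"Sequential: {seq_gb_s:.1f} GB/s, 4 KiB random: {rand_gb_s:.2f} GB/s")
# The headline sequential number barely shows up in day-to-day use, where
# small random accesses (and thus IOPS and latency) set the pace.
```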
 
With a free board, that makes it advantageous to go with it. Same if you got a new TR board.

As for PCIe 3.0, honestly it will still be some years before we truly see it saturated by a single card; the only advantage PCIe 4.0 serves right now is in storage, which can take advantage of the increased speed. However, while faster transfer rates are nice, one thing I have noticed is that IOPS are still similar, and IOPS make for a smoother experience than raw transfer rates do. It's why Intel's top Optane PCIe SSD gets such good reviews.

As well, we have Optane DIMMs to look forward to; they should slowly trickle down and eventually reach a better $/GB ratio, which would make PCIe SSDs pretty pointless. Of course, that's probably well down the line and won't be widely adopted until there is also an alternative that AMD can utilize.
Networking can also take advantage of PCIe 4.0. You can run dual-port 100Gbps at full speed on PCIe 4.0; you cannot do that on PCIe 3.0.
 

InvalidError

Titan
Moderator
Networking can also take advantage of PCIe 4.0. You can run dual-port 100Gbps at full speed on PCIe 4.0; you cannot do that on PCIe 3.0.
You can do 100GbE on PCIe 3.0; all you need is a spare x16 slot, though you could probably count on one hand the number of people in the world who care to have 100GbE on an HEDT machine. This sort of speed is mostly found in HPC applications, where higher bandwidth is necessary to reduce latency.
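For reference, a quick line-rate sanity check of both claims; this only accounts for the 128b/130b link encoding, so real NICs land a little lower:

```python
# Usable PCIe x16 bandwidth vs. 100 GbE port counts (line-rate math only).

def pcie_x16_gbit_s(gt_per_s: float) -> float:
    """x16 link bandwidth in Gbit/s after 128b/130b encoding overhead."""
    return gt_per_s * 16 * (128 / 130)

gen3 = pcie_x16_gbit_s(8.0)    # ~126 Gbit/s
gen4 = pcie_x16_gbit_s(16.0)   # ~252 Gbit/s

print(f"PCIe 3.0 x16: ~{gen3:.0f} Gbit/s -> enough for one 100 GbE port")
print(f"PCIe 4.0 x16: ~{gen4:.0f} Gbit/s -> enough for a dual-port (200 Gbit/s) NIC")
```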
 
Networking can also take advantage of PCIe 4.0. You can run dual-port 100Gbps at full speed on PCIe 4.0; you cannot do that on PCIe 3.0.

Except for servers and HPC, where is it useful? There is currently no consumer 100Gbps card out. Hell, we are still waiting for 10Gbps to saturate the consumer market.

Right now, PCIe 4.0's only consumer advantage is storage. Saturation of PCIe 3.0 will not happen for the vast majority for a few years, and by then Intel will either be on 4.0 as well or we will see both on 5.0.
 
You can do 100GbE on PCIe 3.0; all you need is a spare x16 slot, though you could probably count on one hand the number of people in the world who care to have 100GbE on an HEDT machine. This sort of speed is mostly found in HPC applications, where higher bandwidth is necessary to reduce latency.

Except for servers and HPC, where is it useful? There is currently no consumer 100Gbps card out. Hell, we are still waiting for 10Gbps to saturate the consumer market.

Right now, PCIe 4.0's only consumer advantage is storage. Saturation of PCIe 3.0 will not happen for the vast majority for a few years, and by then Intel will either be on 4.0 as well or we will see both on 5.0.

Yes, on PCIe 3.0 you need two x16 slots and run dual single-port cards, and you really need four ports for good redundancy. But outside of servers and HPC, 100GbE isn't needed; I was just throwing out another situation where PCIe 4.0 is useful.
 
I was talking dollar-to-performance, or per-core performance. There is a very big price difference. Yes, AMD has taken over the HEDT high-end tier, but neither company has anything in the same price range as the other. I'd be a little curious about a couple of benchmarks with a core-for-core comparison; most motherboards let you turn off cores. Core for core, Intel will probably win. Again, not an Intel fanboy; as I said, I go with what I can afford and get the best performance out of, as well as being able to use it for a while. I can't afford to upgrade for every new chip. I'm still running a 6850K, but I am hoping to upgrade to a Core i9-10940X, which is just over $1,100. I already have the motherboard, given to me for free; it just needs a BIOS update. And I am hoping it will last me a few years, too, though I'm not so confident in that since it's still running PCIe 3.0.

I still think that for many workstation tasks, the new Threadrippers still have a price-to-performance advantage. Also, being the best of HEDT comes with a premium. Not many people knocked the i9-9980XE when it came out at close to $2,000, simply because it was the best of the best and you can charge a premium for that (just like Nvidia and the RTX 2080 Ti). At a certain point, a person's work time is worth more than the increased initial cost of the system they work with. At least that's the picture I've gotten from reviewers like Tom's, Gamers Nexus, Hardware Unboxed, and several others.
 
I still think that for many workstation tasks, the new Threadrippers still have a price-to-performance advantage. Also, being the best of HEDT comes with a premium. Not many people knocked the i9-9980XE when it came out at close to $2,000, simply because it was the best of the best and you can charge a premium for that (just like Nvidia and the RTX 2080 Ti). At a certain point, a person's work time is worth more than the increased initial cost of the system they work with. At least that's the picture I've gotten from reviewers like Tom's, Gamers Nexus, Hardware Unboxed, and several others.

That is true. If someone makes $120/hr doing work, and being even just 10% faster cuts an hour a week, that alone will make up for the overall cost difference. It's why costs in workstation/HPC don't matter as much as they do in mainstream consumer areas.
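A minimal sketch of that payback math; the hourly rate and time saved are from the post above, while the price premium is a hypothetical figure:

```python
# Break-even calculation for paying more up front to save billable time.

hourly_rate = 120          # $/hour billed (from the post above)
hours_saved_per_week = 1   # from a ~10% faster machine (from the post above)
price_premium = 1500       # hypothetical extra cost of the faster system, $

weekly_value = hourly_rate * hours_saved_per_week
weeks_to_break_even = price_premium / weekly_value

print(f"Value of time saved: ${weekly_value}/week")
print(f"Break-even after ~{weeks_to_break_even:.0f} weeks "
      f"(~${weekly_value * 52}/year thereafter)")
```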
 

atomicWAR

Glorious
Ambassador
@InvalidError

I ran into this on AnandTech...


...and while it never made it out of the early stages of development, this is the kind of tech I was worried Intel was holding back due to a lack of AMD pressure. For all we know, this tech didn't scale or perform well, or Intel had other reasons to shelve it. Timing-wise, it could have been a pre-emptive idea to combat Ryzen; I suspect Intel had an idea of what was coming their way. At the end of the day, though, without both a competitive AMD and a competitive Intel, we won't see these big changes in CPU computing if one vendor becomes too dominant. I am glad to see the big changes continue to come. The CPU market got way too stale there for far too long. Remember when you needed a new CPU every 3 years for gaming? I don't miss the cost of those upgrades, but I do miss the breakneck speed of innovation from those days.
 
@InvalidError

I ran into this on AnandTech...


...and while it never made it out of the early stages of development, this is the kind of tech I was worried Intel was holding back due to a lack of AMD pressure. For all we know, this tech didn't scale or perform well, or Intel had other reasons to shelve it. Timing-wise, it could have been a pre-emptive idea to combat Ryzen; I suspect Intel had an idea of what was coming their way. At the end of the day, though, without both a competitive AMD and a competitive Intel, we won't see these big changes in CPU computing if one vendor becomes too dominant. I am glad to see the big changes continue to come. The CPU market got way too stale there for far too long. Remember when you needed a new CPU every 3 years for gaming? I don't miss the cost of those upgrades, but I do miss the breakneck speed of innovation from those days.

That looks like an old design idea similar to Intel's old Core 2. Since it's even based on LGA 1156, or so it looks, which was the successor to Core 2 and the first Core i platform, I could make the assumption that they were toying with the idea of an MCM design to allow for more cores, as QPI would handle it better than the old FSB could.

However, I doubt they have any plans to utilize this method anymore and would instead use Foveros, which is better than MCM in many ways.
 

atomicWAR

Glorious
Ambassador
That looks like an old design idea similar to Intel's old Core 2. Since it's even based on LGA 1156, or so it looks, which was the successor to Core 2 and the first Core i platform, I could make the assumption that they were toying with the idea of an MCM design to allow for more cores, as QPI would handle it better than the old FSB could.

However, I doubt they have any plans to utilize this method anymore and would instead use Foveros, which is better than MCM in many ways.

I would agree 3D chip stacking/Foveros is where Intel is headed in the future, but this goes to show they looked at alternatives in the past. Without a competitive AMD, though, Intel had little reason to push the core count higher until Ryzen started wiping the floor with Intel, particularly after the 3000 series launched... Anyway, that was my whole point: without a competitive CPU landscape we get stuck with incremental upgrades at best, and companies get lax about giving the consumer more for their money. We need a strong AMD and Intel, or we will find ourselves in another stagnant CPU market like most of Intel's Core series CPUs, with 5-8% performance bumps gen-on-gen (Nehalem to Sandy Bridge was a pretty big jump, to be fair, but besides that most were not).
 

InvalidError

Titan
Moderator
Without a competitive AMD, though, Intel had little reason to push the core count higher until Ryzen started wiping the floor with Intel
Regardless of what AMD is doing, ever larger chips on 14nm aren't economically sustainable (even more so while downward pricing pressure exists) and there isn't much Intel can do about that until it gets 10nm and beyond sorted out.

We need a strong AMD and Intel, or we will find ourselves in another stagnant CPU market like most of Intel's Core series CPUs, with 5-8% performance bumps gen-on-gen
Don't worry, we'll be down to sub-10% gains gen-on-gen for both Intel and AMD soon enough. A huge chunk of recent gains on both sides (Zen 2 and Ice Lake) comes from increased AVX throughput. You can only make AVX-related structures so large before they become detrimental to other core functions, attainable clocks, latency and power draw.
 
Don't worry, we'll be down to sub-10% gains gen-on-gen for both Intel and AMD soon enough. A huge chunk of recent gains on both sides (Zen 2 and Ice Lake) comes from increased AVX throughput. You can only make AVX-related structures so large before they become detrimental to other core functions, attainable clocks, latency and power draw.

From what I see, a lot of the gains from both Intel and AMD (other than AMD switching from Bulldozer to Zen) come from increasing cache and changing the cache structure. Is that what you mean by increased AVX throughput? I know that Intel also put out extensions that help with specific operations in AVX-512, but what other AVX-related structures have you seen? What kind of AVX-specific improvements has AMD developed, other than building things like AVX2 into the architecture?
 

InvalidError

Titan
Moderator
What kind of AVX-specific improvements has AMD developed, other than building things like AVX2 into the architecture?
Doubling the width of the connection between the AVX execution units and cache for AMD, and doubling the rate at which the AVX units can accept instructions for Intel. You can't double the size of something that is already 256-512 bits wide too many times before the impact on die area, power and clocks becomes unmanageable. Massive data-moving structures (e.g., compute-unit-wide buffers and registers in GPUs) to get data and results from wherever they are to everywhere they need to be next are why GPUs still only operate at ~2GHz.
 
Doubling the width of the connection between the AVX execution units and cache for AMD, and doubling the rate at which the AVX units can accept instructions for Intel. You can't double the size of something that is already 256-512 bits wide too many times before the impact on die area, power and clocks becomes unmanageable. Massive data-moving structures (e.g., compute-unit-wide buffers and registers in GPUs) to get data and results from wherever they are to everywhere they need to be next are why GPUs still only operate at ~2GHz.

For Intel, do you mean the AVX-512 capabilities that they have on their chips? Also, from what I've seen, it seems that both Intel and AMD are baking in extra instructions and enhancing current ones (exactly as you've described) in order to improve performance. Of course, both companies are in the business of making the fastest architectures possible, so I'm sure that once that threshold is reached, they will shift to other architectural details in order to improve performance.
 

InvalidError

Titan
Moderator
For Intel, do you mean the AVX-512 capabilities that they have on their chips?
Partly. Vector FMA instructions get unwieldy with a high operand count, so FMA units need multiple cycles to ingest them. Sunny Cove doubled the number of FMA resources to accommodate AVX-512, which has the knock-on benefit of letting it ingest the same unwieldy FMA operations in half as many clock ticks and spit out results that many clock ticks sooner. Consequently, Ice Lake can be about twice as fast as Coffee Lake in FMA-heavy AVX loads despite its clock handicap.

Of course, you also get the other benefits from new instructions in workloads that can use them.
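To illustrate the throughput side of that, here is a generic peak-FLOPS sketch; the core counts, FMA unit counts, vector widths and clocks are illustrative assumptions, not the exact execution-port layout of any real CPU:

```python
# Generic peak-FP64 model: the point is that vector width, FMA unit count
# and clock multiply together. All parameter values are illustrative.

def peak_gflops(cores: int, fma_units: int, vector_bits: int,
                clock_ghz: float, element_bits: int = 64) -> float:
    flop_per_cycle_per_core = fma_units * (vector_bits // element_bits) * 2  # FMA = mul + add
    return cores * flop_per_cycle_per_core * clock_ghz

avx2_part   = peak_gflops(cores=8, fma_units=2, vector_bits=256, clock_ghz=4.5)
avx512_part = peak_gflops(cores=8, fma_units=2, vector_bits=512, clock_ghz=3.8)

print(f"AVX2-style config:    {avx2_part:.0f} GFLOPS")
print(f"AVX-512-style config: {avx512_part:.0f} GFLOPS")
# Doubling the vector FMA width outweighs the clock deficit in this sketch,
# which is the effect described above for FMA-heavy AVX workloads.
```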
 

atomicWAR

Glorious
Ambassador
Regardless of what AMD is doing, ever larger chips on 14nm aren't economically sustainable (even more so while downward pricing pressure exists) and there isn't much Intel can do about that until it gets 10nm and beyond sorted out.


Don't worry, we'll be down to sub-10% gains gen-on-gen for both Intel and AMD soon enough. A huge chunk of recent gains on both sides (Zen 2 and Ice Lake) comes from increased AVX throughput. You can only make AVX-related structures so large before they become detrimental to other core functions, attainable clocks, latency and power draw.

On the first bit, yes, absolutely when using monolithic dies, but less true when going with chiplets. I posted an AnandTech piece above which shows Intel was at least looking at the chiplet approach as recently as mainstream Skylake (not including the old Core 2 Quads). Using four Skylake dual-core dies would have kept costs down, yields solid, and total die size down (assuming the dies do not carry iGPUs), and increased the core count of mainstream CPUs, all while remaining on the 14nm node. So again, Intel clearly could have done more at 14nm but chose not to for whatever reason (performance, cost, an assumption that AMD would fumble Ryzen, etc.; the list is pretty endless).
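As a rough sketch of that yield argument, here is a simple Poisson defect-density model; the die areas and defect density are made-up numbers purely to illustrate the point, not Intel's actual figures:

```python
# Rough Poisson yield model: one large monolithic die vs. small chiplets.
# Die areas and defect density are made-up illustrative numbers.
import math

def poisson_yield(area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-area_mm2 * defects_per_mm2)

d0 = 0.002  # assumed defects per mm^2

monolithic_8c = poisson_yield(200, d0)  # hypothetical 200 mm^2 eight-core die
chiplet_2c    = poisson_yield(50, d0)   # hypothetical 50 mm^2 dual-core die

print(f"200 mm^2 monolithic die yield: {monolithic_8c:.1%}")
print(f"50 mm^2 chiplet yield:         {chiplet_2c:.1%}")
# Bad chiplets are discarded individually, so the silicon thrown away per
# working 8-core package is far less than with one big die.
```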

As for the sub-10% gains in the future: I hope this is not the case and 3D chip stacking works around it, but as a physics nerd I knew this time was coming for silicon. I can remember when the belief was that 5nm was the last node silicon could do, and while it looks like fabs will push past it, we are clearly seeing the brick wall that is quantum tunneling approaching faster with each new node. Clock speeds are no longer increasing much with die shrinks. One large reason we haven't seen Intel's 10nm hit prime time is the clock regressions on that node; the IPC increase Ice Lake brought didn't make up nearly enough for what Intel lost in lower clocks. But that shouldn't be new for anyone in here. My point is you are quite possibly right as long as we remain on silicon, maybe even if we don't. IPC bumps have been shrinking for a while, and with clock speeds plateauing or even regressing, your prediction here is likely dead on if 3D chip stacking/MCM designs don't find a workaround.
 

InvalidError

Titan
Moderator
I posted an AnandTech piece above which shows Intel was at least looking at the chiplet approach as recently as mainstream Skylake (not including the old Core 2 Quads).
Chiplets may improve yields, but they also add die area, power and latency overheads for the chiplet-to-chiplet interconnects that monolithic CPUs do not have. On-die bandwidth and low latency are much cheaper than chip-to-chip, so you don't want to split a design unless absolutely necessary... and we are seeing the extreme application of this recently with wafer-scale CPUs. In benchmarks, Zen 2 has 15-20ns worse memory latency than Zen+ due to the CCD-IOD interface and routing, so AMD had to double the L3 cache size to reduce cache miss frequency. A pretty expensive workaround.

Everything has costs and requires compromises.
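A minimal average-memory-access-time sketch of that L3 trade-off, with assumed latencies and hit rates rather than measured Zen figures:

```python
# Average memory access time (AMAT): a bigger L3 can mask the extra
# chiplet-hop memory latency by turning more accesses into cache hits.
# Latencies and hit rates are assumptions for illustration only.

def amat_ns(l3_hit_ns: float, l3_hit_rate: float, dram_ns: float) -> float:
    return l3_hit_rate * l3_hit_ns + (1 - l3_hit_rate) * dram_ns

monolithic = amat_ns(l3_hit_ns=12, l3_hit_rate=0.80, dram_ns=70)  # smaller L3, closer DRAM
chiplet    = amat_ns(l3_hit_ns=12, l3_hit_rate=0.90, dram_ns=88)  # bigger L3, +18 ns to DRAM

print(f"Monolithic-style AMAT: {monolithic:.1f} ns")
print(f"Chiplet-style AMAT:    {chiplet:.1f} ns")
# The doubled L3 buys back the interconnect penalty, at the cost of die area.
```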
 

atomicWAR

Glorious
Ambassador
^ Agreed across the board. There are trade-offs for nearly everything in CPU design. My only point was that Intel did have options... how good those options were, I don't know. Clearly the chips mentioned in the article were in very early design phases, and I can say nothing of their power, or lack thereof.
 

deepercool

Prominent
The introduction is somewhat confusing.

Due to the limitations of the 14nm manufacturing process, Intel simply doesn't have room to add more cores, let alone deal with the increased heat, within the same package.

However, two paragraphs down, we see this:

... but Intel's incessant iterations of the 14nm process have yielded higher overclocking potential, lower power consumption, and ...

Correct me if I'm wrong, but shouldn't lower power consumption result in lower heat dissipation and vice versa?
 

InvalidError

Titan
Moderator
Correct me if I'm wrong, but shouldn't lower power consumption result in lower heat dissipation and vice versa?
If you make the existing design more power-efficient but then use the efficiency gain to jack up clock frequencies (which typically means higher voltage and current) and increase core count (more current at the very least), total power draw and heat dissipation still go up.

It gets pretty difficult to keep power draw and heat under control for a given clock frequency and core count without die shrinks to reduce the amount of energy needed to switch FETs on and off.
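The back-of-the-envelope relation behind that is dynamic power scaling roughly with C·V²·f; the values below are normalized illustrations, not real chip numbers:

```python
# CMOS dynamic-power relation: P ~ C * V^2 * f per core. This is why
# "more efficient, then clocked higher with more cores" can still mean
# more total heat. Values are normalized illustrations only.

def relative_power(cores: int, voltage: float, freq_ghz: float) -> float:
    return cores * (voltage ** 2) * freq_ghz  # capacitance folded into the constant

# Hypothetical: an older 10-core config vs. a refreshed 18-core config
# that spends its efficiency gains on core count and clocks.
old = relative_power(cores=10, voltage=1.00, freq_ghz=4.0)
new = relative_power(cores=18, voltage=1.05, freq_ghz=4.3)

print(f"Relative power, old config: {old:.0f}")
print(f"Relative power, new config: {new:.0f} (~{new / old:.1f}x)")
```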
 