Intel Core i9-10900K Review: Ten Cores, 5.3 GHz, and Excessive Power Draw


PapaCrazy

Distinguished
Dec 28, 2011
311
95
18,890
Not exactly. The TDP went up again, quite a bit. Check out other reviews of the i7. I was thinking these might be a good value if they used a thinner die and could get power draw closer to 150W than the 9900K, but instead the TDP shot over 200W because they pushed clock speed again.
Yes, the TDP went up, but my response was about bang for your buck. Knowing where those 10th Gen CPUs perform and what they cost gives you their bang for your buck.
 

PapaCrazy

Distinguished
Dec 28, 2011
311
95
18,890
Yes, performance-wise, but I think they're hitting a point where some people simply won't want to deal with that kind of TDP. The 9900K was already pushing big air coolers and even H-series loops. For people who insist on air cooling, or who live in hot summer climates, 200W+ is hard to live with.
 
Two points: One, I don't think Intel is holding back at all. It ran into technological issues with 10nm that seriously delayed everything. Basically, it tried to do lots of 'cool' stuff at 10nm that came back to haunt it.

Second, AMD didn't actually have to do jack squat in terms of 7nm, relatively speaking. That's all on TSMC. Ditching its fabs years ago and letting GlobalFoundries deal with that headache has proven to be a great long-term move. AMD doesn't control its fabs, but at the same time it's not beholden to them either.

I have to wonder when Intel will finally have to stop trying to control its own fabs so tightly. Supposedly it's currently selling most of the CPUs it makes, but opening up its fabs to other companies has already happened to a limited extent. The current situation looks bad, however, with TSMC having shipped 7nm for complex parts a year ago, while Intel is still only doing 10nm on smaller chips (presumably because yields still suck). If that trend continues, Samsung, TSMC, hell, maybe even SMIC might pull more than a generation ahead of Intel on fabs.

And let's be real: Moore's Law is no longer in effect, scaling is becoming far more difficult, and in the coming decade or two we're going to be hitting a massive brick wall. Intel might be better off divesting itself of its fabs and going fabless before that happens, just like AMD did. Or maybe not. Still, it's hard to imagine Intel's stranglehold on CPUs continuing if something major doesn't change for the company in the next five years (maybe sooner).

You and cryoburner are right that the 7nm node isn't thanks to any AMD effort. But then again, tell me: who thought up and designed the Ryzen CPUs, and all the strategy and planning to have a 7nm chip by 2019 (if the technology was available)? AMD. And as far as I know, AMD pays TSMC for the work; they don't get the chips for free, and they committed without knowing whether customers would buy their chips.
So yeah, AMD didn't do anything about the fabrication node itself (at least not that we know of), but they did a big part nevertheless.

As for your other point, "Intel ran into technological issues with 10nm," you are right in pointing to the "official story."
I believe everyone is entitled to their own opinions, Intel too. Mine is that Intel didn't launch a 10nm desktop CPU because: 1. those chips didn't do anything better than the very well-tuned 14nm++++ node already available; 2. money (why invest more in something that's not good enough to really beat the competition); and 3. timing (they waited too long to do it).
And that's fine, really. Why care? Why change, when people still pay whatever you ask?
 
I'd argue that Apple and some other people using TSMC 7nm are pushing the company much harder than AMD. How many 7nm Apple chips has TSMC fabbed -- how many wafer starts? -- compared to AMD CPU + GPU wafers? And there are other smartphone companies fighting for wafers as well, plus Nvidia.

AMD is fully responsible for the design of the Zen 2 architecture (and Zen+ and Zen before it), but the node is the responsibility of a separate company, which frees up a lot of resources AMD would otherwise need to devote to fabs. AMD was failing, badly, before it divested itself of the foundry side of the company.

Regarding Intel, the whole point of not doing larger 10nm chips goes back to yields. There are highly credible sources suggesting yields on Cannon Lake were under 10%, and it wasn't even a big chip. That was first-gen 10nm. Intel walked back some things, fixed others, etc., to get to 10nm+, but I don't think yields are all that great even now. Where 14nm++ is probably getting 90-95% yields of usable chips (after harvesting), I think a decent chunk of 10nm Ice Lake dies are still non-functional.
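To put rough numbers on how die size amplifies a defect-density problem, here's a minimal sketch using the classic Poisson yield model; the defect densities and die areas below are illustrative assumptions, not Intel's actual figures:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Fraction of defect-free dies: Y = exp(-A * D0)."""
    return math.exp(-(die_area_mm2 / 100.0) * defects_per_cm2)

# Illustrative defect densities: a mature node vs. a struggling one.
for node, d0 in [("mature 14nm-class", 0.1), ("early 10nm-class", 3.0)]:
    for area_mm2 in (70, 200):  # small mobile die vs. bigger desktop/server die
        print(f"{node}: {area_mm2} mm^2 -> {poisson_yield(area_mm2, d0):.0%} defect-free")
```

With the assumed numbers, the mature node yields 93-82% of dies while the struggling node collapses to roughly 12% on the small die and under 1% on the big one, which is why a fab in that position ships small chips first.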

But, Intel will do Ice Lake for servers in the coming months, suggesting it has at least somewhat overcome the yield and size issues. On the other hand, Ice Lake server chips typically sell for thousands of dollars, so it can offset lower yields with the astronomical prices. Meanwhile, Rocket Lake is still going to be 14nm for consumer desktops, and we'll need to wait for some variant of Tiger Lake before we get a 10nm mainstream desktop chip.
 

Thanks for writing that. It does indeed explain what AMD did to get where it is now (the strategy I tried to write about), and what Intel tried to do (later rather than sooner) with less success.

Sorry if I can't explain myself too well; English is not my native language, and other than movies, music, and websites like yours and YouTube reviewers, I don't have many ways to practice.

I guess only time will tell.

I would still hope to see a new dawn for Intel desktop CPUs. Sadly, and no matter what the roadmaps show, I still think (till I see otherwise) that it won't be a 10nm one, unless they don't care about PC (Mac not included) desktop CPUs anymore.
 

saunupe1911

Distinguished
Apr 17, 2016
212
76
18,660
Welp, looks like I will be keeping my 6700K @ 4.8GHz a little longer until Zen 3 drops... or we get a proven motherboard/water cooler combo that can really keep 5.1GHz stable and cool for the long haul. Let's see what the vendors come up with.
 

st379

Distinguished
Aug 24, 2013
169
69
18,660
According to the latest roadmap, a 7nm EUV desktop CPU is coming from Intel in Q1 2022.
Until then they will suffer against Zen 3, but it will only last for a year.
Not like the "brilliant" Bulldozer that lasted 6 years.
 
Technically, Intel has been "suffering" with its lack of 10nm desktop and server parts since about 2017, when Cannon Lake was originally supposed to launch. That was pushed back to 2018, which Intel barely hit with a basically fake launch before eventually aborting Cannon Lake altogether. It's a good thing Intel had a 2-3 year lead over AMD at the time. Everything since Kaby Lake has basically been stopgap filler while Intel tries to figure out the next step. Laptops have 10nm now at least, but it's still not the high-end solution Intel really wants to have out there. Intel went from a 2-3 year lead on process tech to now being 1-2 years behind.
 
  • Like
Reactions: saunupe1911

st379

Distinguished
Aug 24, 2013
169
69
18,660
The latest benchmarks I saw show Intel not doing very well against Ryzen 4000 (Zen 2) in notebooks.
If we ignore the fact that they have 85% market share :).
Their reputation will hold until the 7nm launch, unless they don't deliver in 2022. Then I think they will need to give some more "help" to Dell to sell their CPUs.
 
For sure, which is why I put "suffering" in quotes. Intel was clearly behind AMD in CPU performance from 2003-2006 and still kept most of its market share. Pentium 4 and NetBurst were a far bigger problem back in the day. It will take a lot more than the 10nm issues to actually give AMD the overall lead in market share.
 

st379

Distinguished
Aug 24, 2013
169
69
18,660
People these days get their information from the internet.
Back in those days I had less than 100Kbps; today the internet is vastly used by all ages, and 7Mbps is not exactly the 30Kbps I had at certain hours.
This time Intel can't afford to be behind for so many years. People will spread the word a lot quicker.
Look at Amazon's best sellers and many other sites: AMD is on top of the best-selling CPUs.
Unfortunately, I can't find a single AMD notebook. Even in the US (I don't live in the US) they are out of stock. I don't think they have enough wafers to compete.
On paper they have the Ryzen 4800H/4700H/4600H, but you can't find them, or they are out of stock in many countries.
 

spongiemaster

Admirable
Dec 12, 2019
2,362
1,350
7,560
Yes, performance-wise, but I think they're hitting a point where some people simply won't want to deal with that kind of TDP. The 9900K was already pushing big air coolers and even H-series loops. For people who insist on air cooling, or who live in hot summer climates, 200W+ is hard to live with.

The primary target for these chips is gaming. You're not going to be drawing anywhere near 200W in gaming. If you're building a render box that is going to be pounding the CPU for hours at a time, then the 10-series shouldn't be your first choice.
 
  • Like
Reactions: Gurg

Landsharkk

Distinguished
Jul 13, 2002
26
0
18,530
Hey, can someone help me better understand the power consumption numbers (TDP, etc.)?

I currently have an i7-3930K with a TDP of 130W. From the above review, the i9-10900K has a TDP of 125W. Does this mean my current CPU is actually more power hungry than the new 10900K?

I've seen power consumption benchmarks for the 3930K showing 360W max power draw (but I'm not sure if that's whole-system draw or just CPU draw... and what does the review above mean by power draw? Is that whole-system draw or just CPU draw?).

Any help on understanding the power numbers and comparisons with the 3930K would be awesome! Thank you.
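For rough intuition on how those numbers relate, here's a back-of-the-envelope sketch; every figure below is an illustrative assumption, not a measurement from the review:

```python
# TDP is a sustained-load cooling spec tied to base clocks; boost-clock
# package power can run well above it. A wall meter adds everything else.
cpu_package_w    = 250   # assumed all-core boost draw for a 125W-TDP chip
gpu_w            = 220   # assumed GPU draw during a combined load
rest_of_system_w = 60    # assumed motherboard, RAM, drives, fans
psu_efficiency   = 0.90  # assumed (roughly 80 PLUS Gold at this load)

dc_load_w = cpu_package_w + gpu_w + rest_of_system_w
wall_draw_w = dc_load_w / psu_efficiency  # what a wall power meter reads

print(f"DC load: {dc_load_w} W, at the wall: {wall_draw_w:.0f} W")
```

So a 360W figure quoted for a 3930K-era system is almost certainly whole-system wall draw; a CPU-only package-power figure would come in far lower.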
 

Giroro

Splendid
"An air cooler could be somewhat feasible for the chip, provided you select an expensive and rather bulky model."

I can't wait to see how these perform when system integrators and OEMs throw the cheapest possible RGB fans at it.
 
  • Like
Reactions: Soaptrail

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
7 fps more on average than 9th gen.... very impressive:cautious:.

At least we have RDNA 2, Ampere, and Ryzen 4000 later this year. There will be some very exciting products.

It's very good considering the 10900K is just a 9900K with two more cores. Intel says blah blah tweaks, but it's pretty much the same thing.

As for Ryzen 4000, expect the same: it's not going to be a massive jump. 10-15% improvement.
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
9900K will serve me well for a few more years. Will be looking for upgrades once DDR5 and PCIe 5.0 hit the market.
Hoping for some crazy stuff from AMD and Intel.

Well, I fully agree. Don't bother with the upcoming Ryzen and the 10900K; it's pretty much the same as yours except for two more cores (those extra "enhancements"... don't bother).

Although the new Ryzen will be faster, it won't be by a significant amount either. I'd call it a sidegrade.

So I would say wait for AMD/Intel to release DDR5 CPUs and boards. Then it will really be an upgrade.
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
"An air cooler could be somewhat feasible for the chip, provided you select an expensive and rather bulky model."

I can't wait to see how these perform when system integrators and OEMs throw the cheapest possible RGB fans at it.

The cost of the fan isn't the problem, because OEM fans are really, really cheap; it's just that they're sold to end users at ridiculous prices. Also, airflow isn't the problem for fans; noise is.

If you look at most of the RGB fans on the market, they are very low-power fans (meant to move decent air while staying quiet). You don't see the likes of Delta Focus Flow on sale. OEMs will use powerful fans, but because the speed is controlled, they're still quiet at low loads.
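As a rough sketch of that kind of speed control, here's a minimal temperature-to-PWM fan curve; the temperature/duty points are made-up values, not any OEM's actual curve:

```python
# Hypothetical fan curve: linear interpolation between (temp C, PWM duty %) points.
CURVE = [(30, 20), (50, 35), (70, 60), (85, 100)]

def fan_duty(temp_c: float) -> float:
    """Return PWM duty (%) for a temperature, clamped to the curve's endpoints."""
    if temp_c <= CURVE[0][0]:
        return CURVE[0][1]
    if temp_c >= CURVE[-1][0]:
        return CURVE[-1][1]
    for (t0, d0), (t1, d1) in zip(CURVE, CURVE[1:]):
        if t0 <= temp_c <= t1:  # interpolate within this segment
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

for t in (35, 60, 80):
    print(f"{t} C -> {fan_duty(t):.0f}% duty")
```

A powerful fan on a curve like this idles quietly at low load and only gets loud when a 200W-class CPU actually pushes temperatures up.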
 

PCWarrior

Distinguished
May 20, 2013
216
101
18,770
Regarding temperatures

Here is another perspective...

[Temperature chart: power limits removed and 5.2GHz OC results (Optimum Tech)]

[Temperature chart: power limits removed and 5.2GHz OC results (KitGuru)]

[Temperature chart: pure stock and power limits removed results (Linus Tech Tips)]

[Temperature chart: pure stock results (Paul's Hardware)]
 

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260
Btw, I wonder if temps would improve significantly with a custom loop instead.

I feel that an AIO won't work for these CPUs regardless of radiator size. This is because the water flow rate is too low due to the restrictive water channels and tubing.

I don't have a 9900K or a 10900K, but I did try an AIO with an FX-9590 before. It didn't work; the water coming out of the waterblock was hot! I installed a custom loop and it was much better.
 

InvalidError

Titan
Moderator
So I would say wait for AMD/Intel to release DDR5 CPUs and boards. Then it will really be an upgrade.
On the first generation of CPUs after new memory gets introduced and boards are made for both standards during the transition period, there usually aren't any benefits to picking new-gen memory over old-gen as the new memory's performance is saddled with increased latency from reduced operating voltage. You need to wait for 2nd-gen CPUs and the DRAM to mature enough to close the latency gap with old-gen before new-gen memory becomes clearly superior.

For the first year or two, DDR5 will be at a significant premium over DDR4, so I wouldn't be surprised if people decided to stick to DDR4 boards for their next-gen socket in late-2021/2022.
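For concreteness, first-word latency is CL × 2000 / (transfer rate in MT/s) nanoseconds; a quick sketch comparing a mature DDR4 kit with a JEDEC-class early DDR5 kit (the specific kits are assumptions for illustration):

```python
def first_word_latency_ns(mt_per_s: int, cas_latency: int) -> float:
    """One DDR clock is 2000/MT/s ns (data rate is twice the I/O clock)."""
    return cas_latency * 2000 / mt_per_s

kits = [
    ("DDR4-3200 CL16", 3200, 16),  # common mature DDR4
    ("DDR5-4800 CL40", 4800, 40),  # assumed early JEDEC-class DDR5
]
for name, rate, cl in kits:
    print(f"{name}: {first_word_latency_ns(rate, cl):.1f} ns")
```

The early DDR5 kit has 50% more bandwidth but roughly 16.7 ns first-word latency versus 10.0 ns, exactly the kind of gap that takes a generation or two of DRAM maturity to close.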
 
  • Like
Reactions: Phaaze88

escksu

Reputable
BANNED
Aug 8, 2019
877
353
5,260

I would say yes for Intel but no for AMD. The reason is that Intel CPUs traditionally don't benefit much from faster RAM with higher latency.

AMD, on the other hand, benefits greatly from faster RAM, even with higher latency.

So DDR5 may be a game changer for AMD. Of course, it's still quite some time away before we see DDR5, but I don't think there's any problem keeping a 9900K till '21/'22.
 
  • Like
Reactions: Soaptrail

Giroro

Splendid
The cost of the fan isn't the problem, because OEM fans are really, really cheap; it's just that they're sold to end users at ridiculous prices. Also, airflow isn't the problem for fans; noise is.

If you look at most of the RGB fans on the market, they are very low-power fans (meant to move decent air while staying quiet). You don't see the likes of Delta Focus Flow on sale. OEMs will use powerful fans, but because the speed is controlled, they're still quiet at low loads.

I mean, I'm curious about the poorly balanced, form-over-function, off-brand "gaming" PCs that pop up at Walmart and Best Buy. I'm sure your Alienwares and Omens will have their own custom solutions that work fine and probably don't look like much.
 

InvalidError

Titan
Moderator
AMD, on the other hand, benefits greatly from faster RAM, even with higher latency.
That is mainly because the Infinity Fabric clock is tied to the memory clock, so Zen with its multiple CCXes penalizes AMD's CPUs multiple times whenever anything goes off-CCX. Zen 3 eliminates one layer of IF by having only one CCX per CCD instead of two, with all cores and L3 in it. If you look at the 3100 vs. the 3300X, having all cores and L3 cache in a single CCX is an advantage of upwards of 20% from eliminating the far-core/L3 latency penalty.

I suspect the Infinity Fabric clock won't be anywhere near as important for single-CCD Zen 3 CPUs, and it'll probably get better still with Zen 4.
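As a rough sketch of that coupling on Zen 2 (the 1800 MHz FCLK ceiling is an assumed typical limit, not a spec):

```python
# Hypothetical illustration of Zen 2's memory/fabric clock coupling.
FCLK_CEILING_MHZ = 1800  # assumed max stable fabric clock for a typical sample

def zen2_clocks(ddr4_mt_per_s: int):
    """Return (MEMCLK, FCLK, UCLK) in MHz for a DDR4 transfer rate."""
    memclk = ddr4_mt_per_s // 2        # DDR: I/O clock is half the data rate
    if memclk <= FCLK_CEILING_MHZ:
        return memclk, memclk, memclk  # 1:1:1 sweet spot, lowest latency
    # Past the ceiling, the memory controller drops to 2:1 and latency suffers.
    return memclk, FCLK_CEILING_MHZ, memclk // 2

for rate in (3200, 3600, 4400):
    m, f, u = zen2_clocks(rate)
    print(f"DDR4-{rate}: MEMCLK={m} FCLK={f} UCLK={u}")
```

That 1:1 window is why DDR4-3600-ish kits are the usual Zen 2 recommendation, and why faster RAM helps AMD more than Intel in the first place.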
 

mcgge1360

Reputable
Oct 3, 2017
116
3
4,685
7 fps more on average than 9th gen.... very impressive:cautious:.
Even if you got a 10GHz, 20-core CPU with the same IPC that only pulled 25W, you wouldn't get a crazy amount of extra FPS. When the CPU isn't the bottleneck, you can't measure it as if it were.
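As a rough sketch of that bottleneck logic (all frame rates below are made-up numbers):

```python
# Delivered FPS is capped by whichever component is slower.
def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """The frame rate you see is the minimum of the CPU and GPU limits."""
    return min(cpu_fps, gpu_fps)

# Once the GPU is the limit (assumed 144 fps here), a faster CPU barely helps.
for cpu_fps in (140, 200, 400):
    print(f"CPU good for {cpu_fps} fps -> {delivered_fps(cpu_fps, 144)} fps delivered")
```

With an assumed GPU limit of 144 fps, nearly tripling CPU speed moves delivered FPS from 140 to 144 and no further, so small average-FPS gaps between high-end CPUs say little about their real difference.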