Intel Core i9-10900K Review: Ten Cores, 5.3 GHz, and Excessive Power Draw

st379

Honorable
Aug 24, 2013
75
25
10,560
0
7 fps more on average than 9th gen... very impressive. :cautious:

At least we have rdna 2, ampere and ryzen 4000 later this year. There will be some very exciting products later this year.
 
Reactions: Makaveli and RodroX

dave.jeffers

Prominent
Nov 1, 2018
8
3
515
0
Even if you want the 10900K, they are saying it's a paper launch. Will be interesting to see how long it takes for Newegg and Amazon to get stock.
 
Apr 24, 2020
5
4
15
0
This talks about fully-patched systems. Is this considered to be a fully patched system, or can we expect a host of patches for Spectre etc. to slow everything down when they are eventually released?

I expected more discussion of what Intel might have done to reduce the endless list of security threats that have been uncovered in the last few years.
 

King_V

Distinguished
Wait for it. The intel shills are coming to this thread soon to wail us their one lone song
Well, I mean, it does, after all, offer the biggest lead in frames per second over the top AMD processor. With a 2080 Ti. Um... at 1080p. At frame rates where the human eye is incapable of perceiving the difference.

But, think of it this way, at lower resolutions (1600x900, 1280x720), the frame rate lead would be EVEN MOAR!!
 
"The heavily-threaded y-cruncher benchmark, which computes pi using the taxing AVX instruction set, reveals what we consider to be erroneous test results based upon our previous experience with Intel chips based on the never-ending Skylake architecture."
In Anandtech's review we see the same behavior for the 10900K. https://www.anandtech.com/show/15785/the-intel-comet-lake-review-skylake-we-go-again/6 Ian said: "y-Cruncher is another one where the Core i9 performs worse than the Core i7 in the multithreaded test, despite being better on the single threaded test. We again put this down to memory bandwidth. We need to update this test to the latest version of y-Cruncher, which has additional optimizations for Zen 2 processors, but also to increase the digit count in our MT test."
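For anyone curious what "computing pi across many threads" looks like in miniature, here's a toy sketch in Python. This is emphatically not how y-cruncher works (it uses much faster series and hand-tuned AVX kernels); the series choice, term count, and worker count below are arbitrary illustration:

```python
# Toy stand-in for a multithreaded pi benchmark, NOT y-cruncher itself.
# The slowly converging Leibniz series
#   pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# is split into contiguous chunks, one per worker process.
from multiprocessing import Pool

def partial_sum(bounds):
    start, end = bounds
    # Sum this worker's slice of the alternating series.
    return sum((-1.0) ** k / (2 * k + 1) for k in range(start, end))

def estimate_pi(terms=2_000_000, workers=4):
    chunk = terms // workers
    bounds = [(i * chunk, (i + 1) * chunk) for i in range(workers)]
    with Pool(workers) as pool:
        # Partial sums are independent, so the work parallelizes cleanly.
        return 4.0 * sum(pool.map(partial_sum, bounds))

if __name__ == "__main__":
    print(estimate_pi())  # close to 3.141592..., error shrinks as terms grows
```

Even this trivial version shows the pattern Ian describes: each core grinds through arithmetic while contending for shared memory bandwidth, so more cores don't always mean proportionally better multithreaded scores.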
 

dimar

Distinguished
Mar 30, 2009
895
1
18,985
0
9900K will serve me well for a few more years. I'll be looking for upgrades once DDR5 and PCIe 5.0 hit the market.
Hoping for some crazy stuff from AMD and Intel.
 
I don't see why they are pushing this as a "gaming" processor, seeing as today's games don't show much difference between even 6 cores/12 threads and 8 cores/16 threads, let alone 10 cores/20 threads. I guess it's because at the kinds of heavily multithreaded tasks you would normally buy a higher core-count processor for, the 10900K still tends to underperform compared to the less-expensive competition, while drawing close to double the power and putting out significantly more heat.

So, gamers might be considered a bit more tolerant of that trade-off to get those slightly higher clock rates, even if the extra cores they are paying for will sit around mostly unused. Even those interested in top gaming performance at low resolutions with a high-end graphics card should probably ignore this chip and, if they want to go Intel, look to the Comet Lake i5s or i7s instead. I suspect those should be able to clock just as high.
 

RodroX

Commendable
Aug 4, 2019
1,177
331
1,340
53
So what I take from this review is:
  1. If you need the best bang-for-the-buck gaming CPU (at least until the reviews of the Core i7 10700K are out and show otherwise), the Ryzen 7 3700X is still the king.
  2. If you want the best gaming CPU hands down (and don't want to spend a huge amount of money on a complicated and very expensive water loop to keep a CPU under 100°C), then the Intel Core i9 9900K and the upcoming Core i7 10700K are your best choices.
  3. If you can't spend the money for the above two, then wait for the i5 10600K review, and don't forget about the Ryzen 5 3600 and 3600X, which are more than capable CPUs for any GPU today.
  4. If you need a workstation CPU on a tight budget, then you have the Ryzen 9 3900X, then the Ryzen 9 3950X, and if you can spend more, nothing beats the new Threadripper 3xxx series.
I feel Intel has pushed the 14nm+++++ node the best they could, and that's to some awesome levels of performance. I do admit AMD has the advantage of the smaller process, but let's keep in mind Intel had a lot of time and resources (more than AMD for sure) to work on shrinking the process, and all they did was come up with one excuse after another.
Also remember AMD, with its low resources and really tiny market share, was able to go from 14nm to 7nm in less than 3 years, while Intel has been stuck on 14nm since 2015 (fixed, thanks to jeremyj_83's comment below).

Anyways, I guess we will have to wait for 10nm, or whatever Intel is holding back.

I'm very happy with my experience with the Core i5 3570, it served me well for many years (and still does every now and then when I need to check components), but I believe I will have to wait longer to have a reason to go back to Intel, especially with Zen 3 around the corner.
 
Last edited:
Also remember AMD, with its low resources and really tiny market share, was able to go from 14nm to 7nm in only 3 and a half years, while Intel has been stuck on 14nm since 2015.

Ryzen 1800X was released on 14/16nm in February 2017 and Ryzen 3700X was released on 7nm in July 2019. For the CPU line it only took them 2.5 years to move up nodes. On the GPU side the RX 480 was released on 16nm in July 2016 and Radeon VII was released on 7nm in February 2019, again only 2.5-ish years.
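Those 2.5-year figures check out with some quick date arithmetic (launch months approximated to the 1st, since exact launch days aren't given in the post):

```python
from datetime import date

def years_between(a, b):
    """Approximate span in years between two dates."""
    return (b - a).days / 365.25

# CPU: Ryzen 7 1800X (Feb 2017) -> Ryzen 7 3700X (Jul 2019)
cpu_span = years_between(date(2017, 2, 1), date(2019, 7, 1))
# GPU: RX 480 (Jul 2016) -> Radeon VII (Feb 2019)
gpu_span = years_between(date(2016, 7, 1), date(2019, 2, 1))

print(round(cpu_span, 1), round(gpu_span, 1))  # 2.4 2.6
```

Both transitions land in the 2.4-2.6-year range, versus 14nm shipping in Intel desktop parts continuously since 2015.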
 
Reactions: Makaveli

RodroX

Commendable
Aug 4, 2019
1,177
331
1,340
53
Ryzen 1800X was released on 14/16nm in February 2017 and Ryzen 3700X was released on 7nm in July 2019. For the CPU line it only took them 2.5 years to move up nodes. On the GPU side the RX 480 was released on 16nm in July 2016 and Radeon VII was released on 7nm in February 2019, again only 2.5-ish years.
lol thanks for the correction, I guess it's not the best idea to write on a forum while trying to work at the same time!
 

st379

Honorable
Aug 24, 2013
75
25
10,560
0
Paul thank you for the review.
I really like the last page where you average all the productivity and gaming tests.
I hope you keep on doing it in future benchmarks.
 
Last edited:
Also remember AMD, with its low resources and really tiny market share, was able to go from 14nm to 7nm in only 3 and a half years, while Intel has been stuck on 14nm since 2015.
Well, technically TSMC did, at least on the manufacturing side of things. AMD used to make their own chips, but spun off their chip manufacturing business to become GlobalFoundries a decade ago, which is now an entirely separate company. And GlobalFoundries cancelled their 7nm plans a couple years back, though they do still produce the 12nm IO chip used in Ryzen 3000 processors.
 
Reactions: RodroX

Phaaze88

Splendid
Ambassador
WHOO! Hot diggity!
Come on, y'all can't see the beauty of this AWESOME CPU?!
My in game fps is going to be SO high now! Even lower INPUT LAG, and I get to use a 360mm AIO to boot!
Man, my e-peen is going to be so HUGE... can't touch this, BRUH!

At frame rates where the human eye is incapable of perceiving the difference.
Yo, get out of here with yo placebo effect nonsense! You can't tell the difference between 300 and 320fps? BRUH.

But, think of it this way, at lower resolutions (1600x900, 1280x720), the frame rate lead would be EVEN MOAR!!
So many UNENLIGHTENED in this thread who don't understand... :pfff:
Even more fps, even lower input lag, for just short of 1000USD?! Imma be Diamond level in no time! Come on, open y'all's EYES, BRUH!

10900K or nothing, yo! Anyone settling for less is wack!
Then there's the 11900K coming out later on the same socket?! WUT.
All hail Intel, BRUH.

end sarcasm
 

Integr8d

Distinguished
May 28, 2011
68
3
18,635
0
Great. Now run all these tests with a cooler designed to dissipate 125W of thermals...

Intel: “Whoa whoa whoa. These 125W aren’t exactly ya‘ grandma’s 125W. You want something a little bigger.”

*TH grabs a 280mm AIO

Intel: “Umm. So 250W isn’t exactly where it stops either. You know? Bigger’s better, right?”

*TH starts bending tubing, placing 80mm rads.

Intel: “Hey hey. I gotta better idea... Joe... JOE!!!! Where’d ya put the LN2?... No... I think you left it in the trunk... IN THE TRUNK!!!”

*Hands the LN2 to TH.

Intel (sweating profusely): “Okay. Now we can run some benches.”
 
Reactions: Don Frenser

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
228
186
260
0
This talks about fully-patched systems. Is this considered to be a fully patched system, or can we expect a host of patches for Spectre etc. to slow everything down when they are eventually released?

I expected more discussion of what Intel might have done to reduce the endless list of security threats that have been uncovered in the last few years.
Reality is that very little has been done in hardware to improve / mitigate the performance impact. Intel will need a much bigger architectural update to address this, should it choose to even do so. Paul tests with the latest Windows 10 build (well, latest as of a few months back maybe?) and BIOS releases, so all patches should be active. In my testing of CPUs prior to coming to Tom's Hardware, I found the biggest impact on performance was in stuff like random IO with SSD benchmarks. But then, I didn't do browser tests.
 

JarredWaltonGPU

Senior GPU Editor
Editor
Feb 21, 2020
228
186
260
0
Also remember AMD, with its low resources and really tiny market share was able to go from 14nm to 7nm in less than 3 years, while intel has been stuck with the 14nm since 2015.

Anwyays, I guess we will have to wait for 10nm, or whatever Intel is holding back.
Two points: One, I don't think Intel is holding back at all. It ran into technological issues with 10nm that seriously delayed everything. Basically, it tried to do lots of 'cool' stuff at 10nm that came back to haunt it.

Second, AMD didn't actually have to do jack squat in terms of 7nm, relatively speaking. That's all on TSMC. Ditching its fabs years ago and letting GlobalFoundries deal with that headache has proven to be a great long-term move. AMD doesn't control its fabs, but at the same time it's not beholden to them either.

I have to wonder when Intel will finally have to stop trying to control its own fabs so tightly. I mean, it's supposedly selling most of the CPUs it makes right now, but opening up its fabs to other companies has already happened to a limited extent. The current situation looks bad, however, with TSMC having shipped complex 7nm parts a year ago, while Intel is still only doing 10nm on smaller chips (presumably because yields still suck). If that trend continues, Samsung, TSMC, hell maybe even SMIC might pull more than a generation ahead of Intel on fabs.

And let's be real: Moore's Law is no longer in effect, scaling is becoming far more difficult, and in the coming decade or two we're going to be hitting a massive brick wall. Intel might be better off divesting of its fabs and going fabless before that happens, just like AMD did. Or maybe not. Still, hard to imagine Intel's stranglehold on CPUs continuing if something major doesn't change for the company in the next five years (maybe sooner).
 
