News Early Intel 10th Gen Comet Lake Pricing Shines Light on Potential AMD Rivals

It's nice that you only focused on the first part, but I included the second part because it has meaning.
Also, the comparison I'm making is between the 9900K and the 3700X, both of which have 8C/16T.

At default settings, with load on all 16 threads, the 9900K only boosts to about 200W for a short period and then drops down to about 150W for the remainder of the time; it's a very, very different thing from a constant stress test.


But OK, sure, go ahead and overclock Zen 2 to 4.7GHz all-core, run Prime95 on it, and prove to me that it will draw less power...
As I have said on a different thread, ExtremeTech is the only publication I have read that shows this. Tom's, AnandTech, KitGuru, etc. all show that the 9900K draws MORE power even during extended tests. A lot of publications don't use Prime95 for power tests anymore because it acts like a power virus anyway. Therefore a reasonable conclusion from the ExtremeTech numbers would be that the 9900K is actually thermal throttling because it is drawing so much power.

Direct from Tomshardware review of the 3700x "We're moving away from using AVX-based stress tests for our CPU power testing, though we will continue to use them for their intended purpose of validating overclocks. AVX-based stress testing utilities essentially act as a power virus that fully saturates the processor in a way that it will rarely, if ever, be used by a real application. Those utilities are useful for testing power delivery subsystems on motherboards, or to generate intense thermal loads for case testing, but they don't provide a performance measurement that can be used to quantify efficiency. "

https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/19

Heck, even the 3970X uses less power during operation than the i9-10980XE, even though the 3970X has a much higher TDP and 14 more cores.
 
...
At default settings, with load on all 16 threads, the 9900K only boosts to about 200W for a short period and then drops down to about 150W for the remainder of the time; it's a very, very different thing from a constant stress test.
You mean like thermal throttling? What reason would it have to lower its clocks? Intel just plain doesn't want to give you full performance after 2 minutes "just because"? Or is it to keep it from overheating?

But OK, sure, go ahead and overclock Zen 2 to 4.7GHz all-core, run Prime95 on it, and prove to me that it will draw less power...

Why would I measure power draw "clock for clock" when Zen 2 has higher IPC and is designed to clock a fair bit lower? ... Unless I was just trying to make Intel look better than it is, that's the only reason I can think of as to why I might do that.

Intelligent comparisons would be "stock vs stock", "best OC vs best OC" (for those who care), or performance per watt -- all of which reviewers have done at some point, and the results of which clearly show that Zen 2 consumes less power and has better temps under full loads.
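(A quick aside on what "performance per watt" means in practice: it's just a benchmark score divided by the average package power during the run. A trivial sketch below, with made-up placeholder numbers rather than any real review data, just to show the metric.)

```python
# "Performance per watt" is simply benchmark score divided by average package power.
# The scores and wattages below are made-up placeholders, purely to show the metric.

def perf_per_watt(score, avg_package_watts):
    return score / avg_package_watts

chip_a = perf_per_watt(score=4800, avg_package_watts=90)    # hypothetical 65W-class part
chip_b = perf_per_watt(score=5000, avg_package_watts=165)   # hypothetical 95W-class part

print(f"chip A: {chip_a:.1f} pts/W vs chip B: {chip_b:.1f} pts/W")
# A slightly lower absolute score can still be by far the more efficient result.
```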

So back to my original comment ... since the 9900K is already a very hot chip, as noted by reviewers, including Tom's review from which I quoted that air cooling would be "absurd" to use, why would adding two extra cores and higher clocks on the same node not make for an even hotter chip?

What sort of magic is going to keep the 10900k from being "hot as Hades" under a full load? Unicorn poo? Come back to reality land ... we're losing you ... 😉
 
So back to my original comment ... since the 9900K is already a very hot chip, as noted by reviewers, including Tom's review from which I quoted that air cooling would be "absurd" to use, why would adding two extra cores and higher clocks on the same node not make for an even hotter chip?

What sort of magic is going to keep the 10900k from being "hot as Hades" under a full load? Unicorn poo? Come back to reality land ... we're losing you ... 😉

Could be more node tweaks or just binning. Look at the 9900KS:

[Chart: overclocked power consumption, Core i9-9900KS vs. Core i9-9900K (Tom's Hardware)]


When overclocked, it ran 200MHz faster than the 9900K yet used 10W less. Adding two more cores is going to make things much more difficult, but if Intel can get the 10900K's power profile in line with what they did with the 9900KS, it might not be that far beyond the 9900K. If Intel just slapped two more cores onto a 9900K and did nothing else, things will probably get ugly, or smoky.
 
It will be interesting, for certain, especially with the new Ryzen CPUs not far behind. A good value call will be when Ryzen 3000 chips drop in price. Imagine a 3600 for 120 bucks, like the 2600 is now. Justifying those new i5 CPUs at 200 might prove difficult.
 
...
When overclocked, it ran 200MHz faster than the 9900K yet used 10W less. Adding two more cores is going to make things much more difficult, but if Intel can get the 10900K's power profile in line with what they did with the 9900KS, it might not be that far beyond the 9900K. If Intel just slapped two more cores onto a 9900K and did nothing else, things will probably get ugly, or smoky.

What might these "node tweaks" be that you speak of? You mean architecture tweaks?


While I was surprised at the 9900KS power consumption, they really just skimmed the very top cream off the 9900K bin for those - literally, it's the exact same chip. Tech reviewer BitWit had a better bin with his old 9900K than he did with his 9900KS -- the 9900K consumed less power at 5.1GHz than his 9900KS did. This means that some early 9900Ks were actually better binned, and Intel simply separated the 9900K into two tiers, calling the crappy ones 9900K and the better ones 9900KS - so if you buy a 9900K today, it is almost guaranteed to not be as good as a 9900K purchased a year ago. A few reviewers confirmed this with their findings. Binning even tighter is going to be tricky ...

"aye givener all she got cap'n, an she canna be milked no more!"

The 9900KS is already a really tight bin, and as I mentioned, there's only so far the laws of physics can be thwarted without resorting to unicorn poo, which doesn't exist.

The 10900K will most likely be the hottest and most power-hungry desktop chip ever made, save for a couple of SKUs in the HEDT space with more than 3x the cores ... I'm not trying to make an emotional argument; this really is extremely likely.
 
Intel's 14nm chips run cooler than Zen 2 even though they run slightly higher all-core clocks at default settings.
They only run hot when overclocked, or in the small boost window at the start of any task, but that is what users want: you want a high boost if it means something will finish earlier. If an OC doesn't improve completion time (enough), you can keep your CPU at defaults, or even use Intel XTU to make profiles for anything where an OC makes sense.
https://www.extremetech.com/computi...0x-and-ryzen-7-3900x-reviewed-red-storm-ryzen
[Chart: Prime95 29.4b8 full-system power consumption (ExtremeTech)]
Nice work inserting a deceptive graph into your post and completely ignoring the contents of the article around the graph. : 3

They are not testing just the CPU power consumption there, and are definitely not testing CPU heat output. That is system power consumption, including X570's relatively power-hungry PCIe 4.0 chipset, a feature the Intel systems lack, and a big part of why even idle power consumption is around 20 watts higher, and why the chipset has its own fan for active cooling. The motherboard they tested is one of the more power-hungry X570 boards too, or at least was at the time of publication, though BIOS revisions may have potentially changed that since. To quote the opening of the power consumption section of that article, appearing a little above that graph...

Update: 7/08/2019: While the results below contain some test data for the 3900X, we’ve followed up this testing with additional chipset evaluations. The X570 chipset uses far more power than the X470 and the Ryzen 7 3700X is a far more power-efficient CPU if paired with an older chipset. More details can be found here.

Oh look, they even linked to an article the day after the review's publication with updated charts pointing out that the Ryzen 3000 CPUs are actually "far more power-efficient" than their review initially made them appear. Testing on a more comparable X470 board using PCIe 3.0, the 3700X system drew 15 watts less at idle, 33 watts less under load, and 37 watts less under peak load in that same Prime95 29.4b8 benchmark, putting load and peak power draw below any of the Intel systems tested there. In the newer version of Prime95, which they also tested, load power draw dropped by 47 watts, and peak power draw by 48 watts on an X470 board compared to what their review showed. And in Cinebench R20, a more "real-world" example, load and peak power dropped by 44 watts in the multithreaded test, and by 17 watts for single-threaded. And not only did the 3700X draw less power than the 9900K in Cinebench, especially if we look at peak boost power where the 9900K system was drawing 90 watts more, but it also finished that particular workload around 8% faster than the 9900K.

As a comparison of CPU power consumption, that initial graph is not particularly useful. First, they are testing Prime95 in that chart, which isn't really indicative of a real-world workload, and can cause CPUs to throttle in different ways depending on how they are designed. It's also not clear exactly how long each of the Intel processors is maintaining their much more power-hungry boost clocks for, or how much work each processor is actually performing during that synthetic workload. Additionally, they are testing system power consumption, but using four different motherboards to do so, each with a different chipset, with one board drawing substantially more power than the others. That doesn't make for a particularly accurate representation of how much power each processor is drawing, as there are too many additional variables clouding the results.
 
What might these "node tweaks" be that you speak of? You mean architecture tweaks?


While I was surprised at the 9900KS power consumption, they really just skimmed the very top cream off the 9900K bin for those - literally, it's the exact same chip. Tech reviewer BitWit had a better bin with his old 9900K than he did with his 9900KS -- the 9900K consumed less power at 5.1GHz than his 9900KS did. This means that some early 9900Ks were actually better binned, and Intel simply separated the 9900K into two tiers, calling the crappy ones 9900K and the better ones 9900KS - so if you buy a 9900K today, it is almost guaranteed to not be as good as a 9900K purchased a year ago. A few reviewers confirmed this with their findings. Binning even tighter is going to be tricky ...

"aye givener all she got cap'n, an she canna be milked no more!"

The 9900KS is already a really tight bin, and as I mentioned, there's only so far the laws of physics can be thwarted without resorting to unicorn poo, which doesn't exist.

The 10900K will most likely be the hottest and most power-hungry desktop chip ever made, save for a couple of SKUs in the HEDT space with more than 3x the cores ... I'm not trying to make an emotional argument; this really is extremely likely.
Time will tell. Not much to debate until reviews. You are forgetting about the AMD FX-9590. I don't think any CPU will take its crown for power consumption.
 
As I have said on a different thread, ExtremeTech is the only publication I have read that shows this. Tom's, AnandTech, KitGuru, etc. all show that the 9900K draws MORE power even during extended tests. A lot of publications don't use Prime95 for power tests anymore because it acts like a power virus anyway. Therefore a reasonable conclusion from the ExtremeTech numbers would be that the 9900K is actually thermal throttling because it is drawing so much power.

Direct from Tomshardware review of the 3700x "We're moving away from using AVX-based stress tests for our CPU power testing, though we will continue to use them for their intended purpose of validating overclocks. AVX-based stress testing utilities essentially act as a power virus that fully saturates the processor in a way that it will rarely, if ever, be used by a real application. Those utilities are useful for testing power delivery subsystems on motherboards, or to generate intense thermal loads for case testing, but they don't provide a performance measurement that can be used to quantify efficiency. "
Yes, thank you very much, agreed 100%.
But still, that's the power draw everybody claims the 9900K has and the new ones will have.
Also, there are others that show the power draw to be very close: at stock speeds it's 185W for the 3700X vs 211W for the 9900K, not exactly such a huge difference if you factor in the clock difference...
https://www.bit-tech.net/reviews/amd-ryzen-7-3700x-review/6/
[Chart: full-system power draw, bit-tech Ryzen 7 3700X review]
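For what it's worth, the arithmetic on those two figures is simple (and remember these are full-system measurements, so the CPU-only gap is a separate question):

```python
# Simple percentage math on the two full-system load figures quoted above (watts).
ryzen_3700x_system = 185
core_9900k_system = 211

delta = core_9900k_system - ryzen_3700x_system
print(f"{delta}W difference at the wall, i.e. {delta / ryzen_3700x_system:.0%} more")
# -> 26W, about 14% more for the whole system. How much of that 26W is the CPU
#    itself versus the board, chipset, and the rest is what gets argued below.
```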
 
You mean like thermal throttling? What reason would it have to lower its clocks? Intel just plain doesn't want to give you full performance after 2 minutes "just because"? Or is it to keep it from overheating?
That's what TDP is; AMD does the exact same thing with Zen 2, and not only that, they don't even reach the full clock rate they are advertising...
Why would I measure power draw "clock for clock" when Zen 2 has higher IPC and is designed to clock a fair bit lower? ... Unless I was just trying to make Intel look better than it is, that's the only reason I can think of as to why I might do that.
Why would you clock one at 4GHz and one at 5GHz and then talk about efficiency? ... Unless I was just trying to make AMD look better than it is, that's the only reason I can think of as to why I might do that.
Intelligent comparisons would be "stock vs stock", "best OC vs best OC" (for those who care), or performance per watt -- all of which reviewers have done at some point, and the results of which clearly show that Zen 2 consumes less power and has better temps under full loads.
Guess what, the ExtremeTech bench does exactly that; this is what stock settings are, and that's what TDP is: you save up some power budget from being idle and let it all go in a short burst, and after that comes the sustained load.
"
In our case, the two Intel motherboards we tested appear to implement the chip manufacturer’s intended thermal and current limits — but this has a definite impact on how our Intel CPUs behave under load.

After a relatively short period of time (8-20 seconds, typically), the Core i9-9900K, 9700K, and 8086K will all yank back hard on the metaphorical throttle. "
What sort of magic is going to keep the 10900k from being "hot as Hades" under a full load? Unicorn poo? Come back to reality land ... we're losing you ... 😉
It will be if you overclock it and run a power virus on it; nobody claimed that under those conditions it wouldn't be.
DEFAULT SETTINGS ARE A VERY DIFFERENT THING, THOUGH.
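To make that burst-then-sustain behaviour at default settings concrete, here's a minimal sketch of how an Intel-style PL1/PL2/tau power limit works. The values are the commonly cited guidance figures for the 9900K (PL1 = 95W, PL2 ≈ 1.25 × PL1, tau on the order of tens of seconds), not measurements; many enthusiast boards raise or remove these limits, which is exactly why one review shows roughly 120W sustained and another shows ~200W bursts.

```python
# Simplified model of an Intel-style PL1/PL2/tau turbo power limit (illustrative only).
# The limits below are commonly cited guidance values for the 9900K; many boards
# raise or ignore them, which is how sustained ~200W figures happen in some reviews.

PL1 = 95.0      # sustained power limit in watts (the advertised "TDP")
PL2 = 118.75    # short-term boost limit in watts (guidance: 1.25 * PL1)
TAU = 28.0      # time constant of the power budget in seconds (board-dependent, ~8 s and up)

def simulate(demand_watts, seconds, dt=1.0):
    """Return per-second package power for a workload that wants `demand_watts`."""
    avg = 0.0            # exponentially weighted moving average of package power
    trace = []
    for _ in range(int(seconds / dt)):
        budget = PL2 if avg < PL1 else PL1   # may burst to PL2 while the average is under PL1
        power = min(demand_watts, budget)
        alpha = dt / TAU                     # EWMA update with time constant TAU
        avg = (1 - alpha) * avg + alpha * power
        trace.append(power)
    return trace

trace = simulate(demand_watts=200.0, seconds=60)
print(trace[:5])    # early samples sit at PL2 (the short boost window)
print(trace[-5:])   # later samples settle at PL1 (the sustained "TDP" level)
```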
 
Nice work inserting a deceptive graph into your post and completely ignoring the contents of the article around the graph. : 3

They are not testing just the CPU power consumption there, and are definitely not testing CPU heat output. That is system power consumption, including X570's relatively power-hungry PCIe 4.0 chipset, a feature the Intel systems lack, and a big part of why even idle power consumption is around 20 watts higher, and why the chipset has its own fan for active cooling. The motherboard they tested is one of the more power-hungry X570 boards too, or at least was at the time of publication, though BIOS revisions may have potentially changed that since. To quote the opening of the power consumption section of that article, appearing a little above that graph...



Oh look, they even linked to an article the day after the review's publication with updated charts pointing out that the Ryzen 3000 CPUs are actually "far more power-efficient" than their review initially made them appear. Testing on a more comparable X470 board using PCIe 3.0, the 3700X system drew 15 watts less at idle, 33 watts less under load, and 37 watts less under peak load in that same Prime95 29.4b8 benchmark, putting load and peak power draw below any of the Intel systems tested there. In the newer version of Prime95, which they also tested, load power draw dropped by 47 watts, and peak power draw by 48 watts on an X470 board compared to what their review showed. And in Cinebench R20, a more "real-world" example, load and peak power dropped by 44 watts in the multithreaded test, and by 17 watts for single-threaded. And not only did the 3700X draw less power than the 9900K in Cinebench, especially if we look at peak boost power where the 9900K system was drawing 90 watts more, but it also finished that particular workload around 8% faster than the 9900K.

As a comparison of CPU power consumption, that initial graph is not particularly useful. First, they are testing Prime95 in that chart, which isn't really indicative of a real-world workload, and can cause CPUs to throttle in different ways depending on how they are designed. It's also not clear exactly how long each of the Intel processors is maintaining their much more power-hungry boost clocks for, or how much work each processor is actually performing during that synthetic workload. Additionally, they are testing system power consumption, but using four different motherboards to do so, each with a different chipset, with one board drawing substantially more power than the others. That doesn't make for a particularly accurate representation of how much power each processor is drawing, as there are too many additional variables clouding the results.
Great, so you agree that there are tons of things that can greatly affect power draw.
So where does the claim come from that the 9900K uses 250W, and that this is apparently the only power draw it ever has, no matter what?
Which benchmark did that come from?
 
On an unrelated note, is anyone else bothered by that photo? It makes no sense - the CPU on the right obviously doesn't fit the socket. I'm like: "well, you already picked your mobo, so I guess you're going with the CPU on the left". (facepalm)

Heh, so it's a Shutterstock photo? ...more like a shudder stock photo.

Besides that, when was the last CPU that Intel even made with pins?

[Image: the article's stock photo of a pinned CPU and a motherboard socket]

The most recent CPU it could probably be is a Bulldozer-series AMD model. I think socket AM3+ is their last PGA socket without a gap in the middle.

[Image: AMD AM3+ CPU socket]

And I don't care how bad Comet Lake is - there's no way it's worse than any AM3 CPU.

Edit: I like the pics that Anandtech uses for their CPU comparison articles. Not only are they interesting and (potentially) relevant, but original content FTW:

[Image: AnandTech's own multi-CPU comparison photo]

Yes actually. I thought the exact same thing.

But to be fair, TH has had a bad run of articles in the past few years, and probably many of the staff are not PC hardware enthusiasts but more journalists at heart, so it doesn't matter as long as the picture looks "cool".
 
Great, so you agree that there are tons of things that can greatly affect power draw.
So where does the claim come from that the 9900K uses 250W, and that this is apparently the only power draw it ever has, no matter what?
Which benchmark did that come from?
The 250W comes from an overclocked 9900K running AVX in Prime95, or ~200W running stock. https://www.tomshardware.com/reviews/intel-core-i9-9900k-9th-gen-cpu,5847-11.html By the time of the Ryzen 3700X review, Tom's Hardware had moved away from using Prime95 for its power consumption tests because it is an unrealistic scenario.

In his review of the 9900K (https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9900k-i7-9700k-i5-9600k-review/21), Dr. Ian Cutress at AnandTech (his PhD is in electrical engineering so he knows a bit about electrical power) goes into a little discussion about TDP and how they measure power on a CPU. "For our testing, we use POV-Ray as our load generator then take the register values for CPU power. This software method, for most platforms, includes the power split between the cores, the DRAM, and the package power. Most users cite this method as not being fully accurate, however compared to system testing it provides a good number without losses, and it forms the basis of the power values used inside the processor for its various functions." His finding for maximum power draw (package) for the 9900K is 168.48W. Compared to the Ryzens at 90.26W (3700X) and 142.09W (3900X), the maximum power of the Intel chip is much higher. https://www.anandtech.com/show/14605/the-and-ryzen-3700x-3900x-review-raising-the-bar/19

The biggest issue with the 9900K is that it has a TDP of 95W. Intel only rates the TDP at the base clock, so once a chip gets into its all-core boost it will draw well beyond that figure. https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo This extra power usage equates to heat, but an OEM can use an HSF designed for 95W and be fine. However, that will affect your performance, as the CPU will throttle once the HSF can no longer handle the added thermal load. With the Ryzen 1000 & 2000 series, AMD's TDP was almost spot on. The Ryzen 2700X has a 105W TDP and a full package load of 107.41W, and the 2600 has a 65W TDP and a full package load of 76.21W. https://www.anandtech.com/show/12625/amd-second-generation-ryzen-7-2700x-2700-ryzen-5-2600x-2600/8 These full package loads are far closer to TDP than what Intel quotes, so if you have a 105W HSF for the 2700X you shouldn't thermally throttle. However, with the 3000 series AMD changed their metric a bit, so now a 65W TDP part has a full package power draw of 90.26W.
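To put those numbers side by side, here's a quick sketch that takes only the rated TDPs and the AnandTech package-power figures already quoted above and computes how far each chip lands from its rating (no new data, just the values cited):

```python
# Rated TDP vs AnandTech's measured peak package power, using only the watt
# figures quoted above.
chips = {
    "Core i9-9900K": (95, 168.48),
    "Ryzen 7 3700X": (65, 90.26),
    "Ryzen 9 3900X": (105, 142.09),
    "Ryzen 7 2700X": (105, 107.41),
    "Ryzen 5 2600":  (65, 76.21),
}

for name, (tdp, package) in chips.items():
    print(f"{name}: {package:6.2f}W measured vs {tdp}W rated ({package / tdp:.0%} of TDP)")
# The 9900K lands around 177% of its rated TDP under an all-core load, while the
# Zen+/Zen 2 parts stay in roughly the 102-139% range.
```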
 
... You are forgetting about the AMD FX-9590. I don't think any CPU will take its crown for power consumption.

No I'm not.

Initial 9590 reviews all have the 9590 between 300W and 350W for FULL SYSTEM draw ... the 10900 is rumoured to pull 300W from the SOCKET (CPU ONLY). Note the difference between full-system draw and CPU-only draw.

(note my use of hyperlinks to articles to support my words)

If a 9900K can pull 250W at the socket under a full AVX load, then it's likely pulling as much power at the socket in that scenario as a 9590 - and the 10900K has two more cores, four more threads, and higher clocks ... I think only a fool would think it won't draw more power. We just have better coolers these days, so we can more easily accept these ridiculous power draws than we could back when the 9590 launched.
 
to be fair TH has had a bad run of articles in the past few years
Yeah, I meant it as a somewhat superficial comment. I would definitely prefer they put their time & energy into the content, not the photos.

Somehow, Anandtech seems to do alright with like 1/10th of the articles. But, their articles are often very, very deep, and they don't seem to have many staff on the payroll.
 
That's what TDP is; AMD does the exact same thing with Zen 2, and not only that, they don't even reach the full clock rate they are advertising...

TDP is thermal throttling? Er ... no. AMD's all-core boost sustains itself quite consistently under load. Gordon Ung from PCWorld pointed this out in the Zen 2 reviews ... this is why in his reviews the 9900K got stomped in multi-threaded loads - he used longer-running tests than most reviewers do, and the 9900K throttled its clocks back after a bit of testing while the Zen 2 parts pulled ahead as soon as that happened. So no, it's not "just like AMD does" ... and your snide remark about boosts is about three months too late to be relevant.

Why would you clock one at 4GHz and one at 5GHz and then talk about efficiency? ... Unless I was just trying to make AMD look better than it is, that's the only reason I can think of as to why I might do that.

A CPU at 4GHz doing the same work, pulling less power, and running cooler than one at 5.0GHz is the more efficient CPU. High clocks don't make for an efficient CPU. In fact, the laws of physics dictate that the higher the clocks, the less efficient the CPU will become. So yeah ... I think I know what I am talking about; please don't insult my intelligence.
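For anyone who wants the physics spelled out: dynamic CPU power scales roughly as C·V²·f, and reaching higher clocks also requires higher voltage, so power climbs much faster than frequency. A rough back-of-the-envelope sketch (the capacitance constant and both voltage figures are illustrative guesses, not measurements of any particular chip):

```python
# Back-of-the-envelope dynamic power model: P ~ C * V^2 * f.
# The capacitance constant and both voltage figures are illustrative guesses,
# not measurements of any specific CPU.

def dynamic_power(c_eff, volts, freq_ghz):
    return c_eff * volts ** 2 * freq_ghz

C_EFF = 10.0   # arbitrary effective switching capacitance; only the ratio matters here

low  = dynamic_power(C_EFF, volts=1.10, freq_ghz=4.0)   # a hypothetical ~4.0GHz all-core chip
high = dynamic_power(C_EFF, volts=1.30, freq_ghz=5.0)   # a hypothetical ~5.0GHz all-core chip

print(f"{high / low:.2f}x the power for {5.0 / 4.0:.2f}x the clock speed")
# -> roughly 1.75x the power for 1.25x the frequency: unless the higher-clocked chip
#    also does ~1.75x the work, it is the less efficient one.
```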

Guess what, the ExtremeTech bench does exactly that; this is what stock settings are, and that's what TDP is: you save up some power budget from being idle and let it all go in a short burst, and after that comes the sustained load.
"
In our case, the two Intel motherboards we tested appear to implement the chip manufacturer’s intended thermal and current limits — but this has a definite impact on how our Intel CPUs behave under load.

After a relatively short period of time (8-20 seconds, typically), the Core i9-9900K, 9700K, and 8086K will all yank back hard on the metaphorical throttle. "

... huh? ... Intel's style of throttling back boost after a certain amount of time isn't what TDP "is" ... again. One way to look at it is that it's a way to make the TDP look lower than it really should be, as a marketing tactic. I think only the most hardened fanbois look at the 9900K's "95W" TDP without a raised eyebrow ...

It will be if you overclock it and run a power virus on it; nobody claimed that under those conditions it wouldn't be.
DEFAULT SETTINGS ARE A VERY DIFFERENT THING, THOUGH.

The 9900K, as it is, is hot at default settings ... as the Tom's review I linked above attested: "it would be absurd to use air cooling ..."

And from TrustedReviews: "
I was initially using the Corsair H60 liquid cooler for my testing rig. It’s done a fine job previously, whether it was cooling the Intel Core i7-8700K or AMD’s Ryzen 7 2700X. Boot up the rig with the i9-9900K installed though, and the Corsair H60 is helpless in preventing a system crash.

Despite boasting two 140mm fans that can run a max speed of 2400rpm, the Corsair Hydro Series H100i v2 Extreme Performance Liquid CPU Cooler didn’t do a much better job either. It at least prevented further system crashes, but heating issues persisted, restricting the processor from exceeding its base clock speed of 3.60GHz, which consequently affected benchmark results. " -
The reviewer then had to install an even better liquid cooler to finish the review. I wonder how well the stock air cooler from the 12-core 3900X would work on an 8-core 9900K? Any guesses?

How is adding two more cores, increasing clocks, and raising the TDP by 30W going to make it run cooler than the 9900K, even if it is better binned? You're really betting it all on that unicorn poo, aren't you?

Back to my comment ... it's going to run hot as Hades under full load. I find it very amusing that you refuse to consider this.
 
Considering that the 3900X has a lower total package power than the 9900K, it would probably be working at full tilt the entire time to cool the 9900K.

I think that is a given ... but I'd actually be interested in seeing someone give this a try, just for shitz'n'giggles.

It's a pretty good cooler, but the 9900K will only work well with a couple of the very best air coolers money can buy -- pretty much all the reviewers have stated this.

For the 9900K it's a 120mm AIO minimum or go home, and certainly if you plan to OC. Even Tom's dual-360mm-rad custom closed loop was getting a bit warm in their OC testing ... and that's about the best cooling money can buy, period, without going to a chiller.

But from what I have been hearing recently, Intel is sprinkling the 10900K with unicorn poo and it will only consume 125W of power, and will only get hot if you run a power virus ... so no worries at all there.

I also find it amusing how my initial comment about the 10700K being the far better choice over the 10900K, one of the reasons being heat and power draw savings, was turned into an argument about Intel vs AMD ... just a casual observation. :)
 
Why would you buy something that would perform less though?

Very simple: if I don't need top-of-the-line performance, and I can get, say, 85% of the performance for 50% of the price, and that last 15% of performance I'm giving up is NOT a make-or-break issue (and that's quite often the case), then I will buy the lesser-performing CPU.
 
...
Chasing AMD wouldn't be worth it anyway. There are markets that are more valuable than the enthusiast market, like HPC, AI, and FPGAs.

My thought is that once Intel has a new node stable enough for wide release, and something like Foveros is cheap enough to implement widely, we will see a major shift in their desktop platforms.

Right now, though, they need to focus (and are focusing) on markets where margins are higher. The enthusiast market is easy to win back. All they would need is a chip that matches on core count, with lower or equal power and higher performance at a really good price point. Other markets are harder to win back.

This is all mostly true, from Intel's standpoint. I'm just commenting that some enthusiasts are becoming, or will become, a little bummed by this lull - however long it ends up being.
 
This is all mostly true, from Intel's standpoint. I'm just commenting that some enthusiasts are becoming, or will become, a little bummed by this lull - however long it ends up being.

I doubt it. AMD was in a very similar position with K8: it was an overall better product that used less power and provided more for less, pretty much during the entire Pentium 4 and Pentium D era. There were one-offs, such as specific use cases where the power difference didn't hurt Intel, or parts like the Pentium D 805 that were cheap and could be heavily overclocked to beat Extreme Edition chips and Athlon 64s. But for quite a few years AMD was in a position of advantage. They spiked to 40% of the desktop market (not just enthusiast) in 2006, just before Intel unleashed Core 2.

After some time, though, AMD hit a capacity issue - they didn't have enough fabs to meet demand - and started to charge more, since they were the favorite for enthusiasts and servers.

Then came Core 2, and it was a massive turnaround. Gone were the days of Intel being way behind on power draw, cost, and performance; here was a new era of low power, high performance, and fantastic value. Better yet, these things overclocked well, and the second gen overclocked even better.

Then during that time we saw AMD run into issues after taking a massive leap of faith, and Intel held 90% of the desktop market share. Most enthusiasts bought a Core CPU. Those who bought AMD usually did so for one of two reasons: brand loyalty, or it was what their budget could handle.

AMD is back to a much higher market share and growing. But we had six years of NetBurst-based CPUs with AMD eating away at Intel's desktop, server, and mobile shares; right now AMD's biggest gains are in desktop, with server following and mobile in last place. If Intel survived that, then it's safe to assume they will survive this, and we will see something more drastic in a few years. Either way, Intel needs to punch back; otherwise AMD will become complacent and we will start to see prices trend higher to show for it.
 
I doubt it. ...

Doubt what? Enthusiasts being bummed by Intel not spanking AMD in retaliation for making a comeback?

I'm already seeing it ... not too many people got excited about 10th gen HEDT, did they? How many people revered Intel HEDT just a couple of years ago? Intel fighting back against 16 cores with ... 10 cores! Get what I am saying? Fanbois won't be bummed no matter what, but that's not who I am referring to.

I never said Intel won't "survive", make a comeback at some point, or at least make it clear that they are trying hard.

Are you sure you're reading my posts correctly? I'm pretty sure I am being more objective than you are seeing.
 
Doubt what? Enthusiasts being bummed by Intel not spanking AMD in retaliation for making a comeback?

I'm already seeing it ... not too many people got excited about 10th gen HEDT, did they? How many people revered Intel HEDT just a couple of years ago? Intel fighting back against 16 cores with ... 10 cores! Get what I am saying? Fanbois won't be bummed no matter what, but that's not who I am referring to.

I never said Intel won't "survive", make a comeback at some point, or at least make it clear that they are trying hard.

Are you sure you're reading my posts correctly? I'm pretty sure I am being more objective than you are seeing.

Read my post in full. I basically pointed out how easily enthusiasts will migrate back to Intel if Intel has the better product offering. Aside from brand loyalists and people who just want to pinch every penny, the moment Intel has something that really competes with or beats AMD, enthusiasts will move back to it.

I never even said Intel was being competitive on core count. But to be fair, Intel wasn't on the same "core" count level as AMD with FX either, and still managed to beat them.

The point being, the market will always gravitate towards the better option, even if said option costs slightly more than the other. Most "enthusiasts" know this is normal for the market; it's happened before and it will happen again, just as AMD will slip and make a mistake again at some point, like they did with K10 and FX. Just ride the waves. Anyone who lets this stop them from buying Intel in the future, if Intel has the better offering, should just quit the game.
 
Read my post in full. I basically pointed out how easily enthusiasts will migrate back to Intel if Intel has the better product offering. Aside from brand loyalists and people who just want to pinch every penny, the moment Intel has something that really competes with or beats AMD, enthusiasts will move back to it.
...
Fair enough, but I didn't disagree with this part (or almost any of your initial response), and I changed my sentiment to it being a "lull" in my subsequent post, as opposed to the "abandonment" I initially stated (in a somewhat tongue-in-cheek, sarcastic "editorial" manner, as I do).

We all saw how fast enthusiasts seemed to jump on the Ryzen bandwagon, so it does make sense.
 
No I'm not.

Initial 9590 reviews all have the 9590 between 300W and 350W for FULL SYSTEM draw ... the 10900 is rumoured to pull 300W from the SOCKET (CPU ONLY). Note the difference between full-system draw and CPU-only draw.

(note my use of hyperlinks to articles to support my words)

If a 9900K can pull 250W at the socket under a full AVX load, then it's likely pulling as much power at the socket in that scenario as a 9590 - and the 10900K has two more cores, four more threads, and higher clocks ... I think only a fool would think it won't draw more power. We just have better coolers these days, so we can more easily accept these ridiculous power draws than we could back when the 9590 launched.
[Chart: full-system load power consumption, FX-9590 vs. Intel systems]


Yes, these are full-system power stats. Note that system power is 200W more than an i7-4770K system. Would you like to argue that the AMD system, not including the CPU, pulls the same wattage as a fully loaded i7-4770K plus the rest of the system? Clearly the 9590 was capable of pulling WAY beyond 200W, which shouldn't be surprising considering even AMD rated the TDP at 220W. In order to hit 250W with a 9900K, you have to run a power bomb test like Prime95 (more stressful than OCCT) and run AVX code (not possible on the 9590). Even sadder, the 9590 wasn't even a true 8-core CPU.
 
[Chart: full-system load power consumption, FX-9590 vs. Intel systems]


Yes, these are full-system power stats. Note that system power is 200W more than an i7-4770K system. Would you like to argue that the AMD system, not including the CPU, pulls the same wattage as a fully loaded i7-4770K plus the rest of the system? Clearly the 9590 was capable of pulling WAY beyond 200W, which shouldn't be surprising considering even AMD rated the TDP at 220W. In order to hit 250W with a 9900K, you have to run a power bomb test like Prime95 (more stressful than OCCT) and run AVX code (not possible on the 9590). Even sadder, the 9590 wasn't even a true 8-core CPU.
That graph is also a good representation of the cost of chasing higher clocks. The 8150 is clocked at 4.0GHz vs 4.7GHz for the 9590, and that extra clock speed resulted in a 50% increase in power draw for about 20% higher clock speed. It's easy to see that the FX was running at the upper limit of what the chip could achieve in clock speed.
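As a rough sanity check on that scaling, here's a P ∝ V²·f dynamic-power approximation applied to those two clocks. Treating the ~50% figure as if it were CPU-only power understates things, since it's really a full-system number, so consider this a lower bound on the implied voltage increase, not a measurement:

```python
# Rough estimate of the voltage increase implied by the FX-8150 -> FX-9590 jump,
# assuming dynamic power scales as P ~ V^2 * f. Purely illustrative; the 1.5x
# figure is a full-system number, so the CPU-only increase is actually larger.

freq_ratio = 4.7 / 4.0    # ~1.18x the clock speed
power_ratio = 1.5         # ~50% more draw under load

volt_ratio = (power_ratio / freq_ratio) ** 0.5
print(f"implied voltage increase: ~{volt_ratio - 1:.0%}")
# -> roughly +13% voltage for +18% clock, consistent with a chip pushed right to
#    the edge of what its silicon could sustain.
```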