Question How many cores for 2020 and beyond?

Ten years for the 2600K is something I would consider an anomaly in the industry, due to the mining craze that hurt gaming, which in turn hurt the demand and upgrade cycle of PCs, as well as the absence of AMD as a serious threat to Intel. I don't think that scenario will repeat in the next 10 years.

The 2600K's overclocking capability has also helped it stay viable. The increase in maximum single-thread performance over the last decade has been about 50%, so if overclocking gained you 20% more performance, you could keep pace with almost half of the top-end processors produced in that decade--and that's quite a feat!
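The claim above is back-of-the-envelope arithmetic; here is a quick sketch using the post's own numbers (a ~50% decade-wide single-thread gain and a ~20% overclock, both rough estimates, not measured data):

```python
# Rough arithmetic behind the "almost half" claim (numbers from the post, not benchmarks).
base = 1.00          # relative single-thread performance of a stock 2600K
decade_gain = 1.50   # ~50% single-thread improvement over the decade
oc_gain = 1.20       # ~20% extra performance from overclocking

oc_perf = base * oc_gain  # 1.20

# Fraction of the decade's total improvement that the overclock covers:
fraction_covered = (oc_perf - base) / (decade_gain - base)
print(f"OC covers {fraction_covered:.0%} of the decade's gain")  # OC covers 40% of the decade's gain
```

40% of the decade's gain is indeed "almost half", which is why an overclocked 2600K stayed competitive with so many later chips.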

But think back to earlier generations like LGA1366 and LGA775. While some select chips overclocked really well and stayed viable, none were produced en masse or were as easy to OC as the 2600K (except maybe the Q6600). That's why very few of them were still around 10 years after their introduction; people moved on to the next generation.

History will teach you a lot about the future. With the upgrade cycle in full swing again, that means upgrades every three years to stay 'current'. You can skip a generation or maybe two depending on what you use your system for (even more if you don't tax it much), but software will also dictate upgrades even when the hardware is still capable--and that's the current game: forced upgrades to generate revenue. Of course, with intense applications and use, only the upper crust will be fast enough, and that's been a never-ending target since the 1990s.

I built some very expensive systems in the 1990s that were intended to last for many years. This was a time when applications didn't change much, gaming was in its infancy, and you didn't 'need' OS upgrades or continuous patching. Even then, we found that we had an 'opportunity cost' by not upgrading, simply because things could run faster and we could get our work done sooner. It didn't make sense not to upgrade when time was money. Once the bloat began in operating systems with Windows 95, it just took off from there, and that brings us to today. Even corporations raising the topic of TCO (total cost of ownership) didn't really slow the acceleration much.

I think as AMD continues to dominate the single-thread and value segments of the market, Intel is working on a 'one-up' solution--which AMD will then counter, and so forth. And as long as developers can truly take advantage of these newer developments (ray tracing, for example), you'll see fairly rapid market adoption, and then they become commonplace. Once that happens, an upgrade will be necessary for sure. And then the game continues. :)
 

numnums

The 3900X isn't much faster today; as I said, the 3600 is all you need for today.

However, if you intend to keep your CPU for years and years, splurging today will allow for more longevity.
Thank you for all the advice! I think I'm gonna wait until this summer when they reveal the Ryzen 4000 series... Once they do a price drop for the 3000 series I think I will get the 3600 :)
 
Currently AMD is WAY WAY more power-efficient.
That's just not correct. With default settings, the power draw of Intel is right in line with Zen; you have to look at overclocked power draw for Intel to draw a lot more power, but as I already said, ~4GHz is also much slower than ~5GHz.
And that's power draw; power efficiency is a whole different story again.
Energy Efficiency
In this section, we measure the total amount of energy consumed for a SuperPi run (single-threaded) and Cinebench (multi-threaded). Since a faster processor will complete a given workload quicker, the total amount of energy used might end up lower than on a low-powered processor, which might draw less power, but takes longer to finish the test.
https://www.techpowerup.com/review/amd-ryzen-7-3700x/18.html
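The quoted methodology boils down to energy = power × time: a chip that draws more watts can still use less total energy if it finishes sooner. A minimal sketch of that comparison (the wattages and runtimes here are made-up illustrative figures, not numbers from the TechPowerUp review):

```python
# Sketch of the "energy to complete a task" comparison described in the quote.
# All figures are illustrative, not measured.
def task_energy(watts: float, seconds: float) -> float:
    """Total energy in joules consumed to finish the workload."""
    return watts * seconds

fast_cpu = task_energy(watts=90, seconds=100)   # draws more, finishes sooner
slow_cpu = task_energy(watts=65, seconds=160)   # draws less, runs longer

print(fast_cpu, slow_cpu)  # 9000.0 10400.0
# The higher-power CPU uses LESS total energy here, so it is the more
# energy-efficient one for this task despite its higher instantaneous draw.
```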
 
Every single review I see shows the opposite.

Also, that test is a single-core test, so most of the CPU isn't even used.

https://ibb.co/Mkhk98Y
I put in the link so that people can look at all the benchmarks. They also measure multi-threaded, and the difference is like 6%; that's not that much more, and unless you spend all your time with your CPU running at 100%, you have to factor in the single-threaded results as well.

Even your picture only shows 20% more power draw, and there is no additional info: what clocks did they run, is this the maximum peak or the sustained draw, and most importantly, how long did each one take?
I already quoted this before: efficiency is not how much the CPU draws in an imaginary endless stress loop, but how much energy it uses to complete a task.
Energy Efficiency
In this section, we measure the total amount of energy consumed for a SuperPi run (single-threaded) and Cinebench (multi-threaded). Since a faster processor will complete a given workload quicker, the total amount of energy used might end up lower than on a low-powered processor, which might draw less power, but takes longer to finish the test.
 
Power shmower--if you want something to be 'powerful' it uses full power. :D

The funny thing is that computers essentially use much more power than back in the day (220W used to be a lot), but if you look at how much computing power you get per watt, modern machines come out far ahead.
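That perf-per-watt point can be made concrete with some rough era-scaled figures (the wattages and GFLOPS below are made-up illustrative numbers, not measurements of any specific machine):

```python
# Illustrative performance-per-watt comparison; all figures are assumptions.
old_pc = {"watts": 220, "gflops": 0.5}     # late-1990s-class machine
new_pc = {"watts": 400, "gflops": 500.0}   # modern desktop

old_eff = old_pc["gflops"] / old_pc["watts"]   # GFLOPS per watt, old
new_eff = new_pc["gflops"] / new_pc["watts"]   # GFLOPS per watt, new

print(f"{new_eff / old_eff:.0f}x more computing per watt")  # 550x more computing per watt
```

So even though total wattage has nearly doubled in this sketch, the work done per watt is orders of magnitude higher.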
 
