[SOLVED] Save for Ryzen, or buy Intel now?


mangaman

Honorable
I've been thinking of upgrading my PC to a Ryzen CPU for a while now, but that costs much more than just upgrading my current CPU. Upgrading my system to a Ryzen 5 2600 would run around $270, while just swapping my Pentium G3258 for an Intel i5 4690K would be around $80. My question is: should I wait and save for a Ryzen, or just upgrade my current CPU right now?

To upgrade my system to a Ryzen, I'll need to install Windows 10, which will be a bit tedious and time-consuming with all of my games to transfer. However, the Ryzen 5 2600 has many more cores and threads than the i5 4690K, as well as far fewer security vulnerabilities.

On the other hand, the i5 4690K is a much cheaper option than the Ryzen, and there is no need to reinstall my OS; it's pretty much just swapping out the CPUs. However, Intel's recent security track record with its CPUs has not been great. Sure, many of these vulnerabilities can be patched easily, but who's to say more won't be found in the near future?

I'm just conflicted: should I save for a Ryzen, or just buy an i5 now? The main games I play are Witcher 3, Flight Simulator, Cities: Skylines, BeamNG and Yakuza 0.
 
Solution
I would say upgrade to the i5. It is still a good CPU, and the 270X will still be OK for games for a while; performance-wise it falls between the GTX 1050 and 1050 Ti. Plus, the i5 would give you the option of upgrading that to something like a GTX 1650 Super a little later on, which would make for a very reasonable 1080p high-settings machine.

Save up after that and upgrade again in another two years or so, when the Ryzen 4000 series is mature or the 5000 series comes out. There is also the possibility Intel will get it together and release another great generation between now and your next upgrade.

In my opinion you need an upgrade now, and even something like a 2200G is only going to be on par with a Haswell i5 for a lot more...
The problem is the platform. There were no 6-core Intel CPUs prior to 8th gen, and on top of the 2 extra cores over a 4-core 4th-gen i7 there's a sizable IPC bump at 8th/9th gen, so the additional cores aren't the whole picture. That's the real reason 6 cores are recommended: the newer platform, not the older quad-core platforms. You don't see any recommendations for the 4-core i3-9100F or i3-9350K, because even with their higher IPC, an older i7-7700K beats them; it takes an i5-9400F with its 2 additional cores to match it.

There's no real future-proofing in a 6/6 CPU. They aren't much better, if at all, than a 4/8 as far as that goes. You'd need to jump to the i7-9700K to get anything resembling longevity. 4/4 and 6/6 CPUs already have one foot in the grave; it won't take all that long before they're no different from a G3258 or any other dual-core CPU as far as gaming ability goes.

Nanometer process will hit a brick wall at 5-7nm, silicon won't support lower. CPUs have been stuck at around 5GHz for over six years now, so there's only one place left to go: thread count. You get far more work done on 2x 4.0GHz threads than on 1x 5.0GHz thread. Consoles have known this for years, using 8-core 1.8GHz processors to do what 4-core PCs do, and game devs know it well too: it's easier to port an 8-thread console game to an 8-thread PC than to cram it into 4 threads. Many modern games, from GTA V on up, are far happier and smoother on 8 threads than on 4, even if the 8 threads are slower.
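The throughput claim above can be sketched with a toy calculation. This is a best-case upper bound that assumes the work is perfectly parallel; real games lose some of it to serial sections and synchronization (Amdahl's law), so treat it as a sketch, not a benchmark:

```python
# Toy aggregate-throughput comparison, assuming perfectly parallel work.
# Real workloads fall short of these numbers (Amdahl's law), but the
# relative ordering is the point being made in the post above.

def aggregate_ghz(threads: int, clock_ghz: float) -> float:
    """Total cycles per nanosecond available across all threads."""
    return threads * clock_ghz

single_fast = aggregate_ghz(1, 5.0)    # one 5.0GHz thread
dual_slower = aggregate_ghz(2, 4.0)    # two 4.0GHz threads
console_style = aggregate_ghz(8, 1.8)  # eight 1.8GHz console-style cores

print(single_fast)     # 5.0
print(dual_slower)     # 8.0
print(console_style)   # 14.4
```

Even the slow 8-core console layout wins on aggregate cycles, which is why ports from 8-thread consoles favor 8-thread PCs.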
 
Nanometer process will hit a brick wall at 5-7nm, silicon won't support lower

Samsung and TSMC are both on track for 3nm around 2023-2024 😉, with Intel around 2025. All three are aiming at 1.4nm to 1.5nm by no later than 2030.

CPUs have been stuck at @ 5GHz for over 6 years now, there's only one place left to go, thread count.

That's actually the lazier way to do it. Reinventing an architecture is an insanely expensive task but it's the best path forward for real innovation and massively better IPC.
 
That's actually the lazier way to do it. Reinventing an architecture is an insanely expensive task but it's the best path forward for real innovation and massively better IPC.
Conventional CPUs mainly run sequential algorithms, and only a finite amount of instruction-level parallelism can be extracted from typical code before the effort per incremental IPC gain becomes ridiculous; all architectures inevitably hit a brick wall there. Also, the more logic you cram into a CPU core to make it do more per clock tick, the harder it becomes to increase clock frequency. In other words, a "massive increase" in IPC, if achievable at all by adding a ton of execution units and register file entries, would come at the expense of much lower clocks.

ARM is a completely different ISA and its IPC isn't any better. Same for MIPS, POWER, RISC-V, IA64, SPARC, etc. There is no miracle instruction set for IPC. If you want massive throughput, you have to go with thread-level parallelism on massively parallel hardware such as GPGPU.
 
Well, Intel hasn't done any real reinventing in a while, not since Sandy Bridge; it's all really been modifications, slight changes here and there. Ryzen was definitely a reinvention, and I'm happy AMD stuck around for it to happen.

I don't see 3nm for desktop use, never mind 1.5nm. Samsung has a massive task there just dealing with voltages and electromigration. That's the brick wall at 5nm with anything approaching 5GHz speeds in a CPU: you are already dealing with currents in excess of 60A at 7nm/10nm/14nm. That much power, with that little insulation between nodes, is a recipe for disaster.

RAM, mobile, storage: sure, fine, 10W-15W usage at most. That's a far cry from the 250W a flagship CPU can hit. At 1.5V that's 167 amps. That's massive power, and it's barely contained at 14nm. At 5nm you'd have nothing but a smoking hole in the motherboard.
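The 167A figure above is just Ohm's-law arithmetic, I = P / V, using the post's own 250W and 1.5V numbers (real desktop CPUs typically run closer to 1.0-1.3V core voltage, so these inputs are illustrative, not measured):

```python
# Back-of-the-envelope current draw from power and voltage: I = P / V.
# 250 W and 1.5 V are the figures used in the post above; the 95 W / 1.2 V
# line is an assumed, more typical desktop chip for comparison.

def current_amps(power_watts: float, voltage_volts: float) -> float:
    return power_watts / voltage_volts

print(round(current_amps(250, 1.5), 1))  # 166.7 -> the ~167 A quoted above
print(round(current_amps(95, 1.2), 1))   # 79.2  -> a milder mainstream chip
```

Halving the voltage or doubling the power both push the current, and the heat density, in the wrong direction for ever-smaller nodes.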
 
I don't see 3nm for desktop use. Nevermind 1.5nm. Samsung has a massive task there just dealing with voltages and electromigration. That's the brick wall at 5nm with anything approaching 5GHz speeds in a cpu, you are already dealing with amperages in excess of 60A at 7nm/10nm/14nm. That much power, with that little insulation between nodes is a recipe for disaster.

Not possible on finFET... quite possible past finFET, which is what $60 billion is going toward. The processes that hit 3nm can be extended a few more reductions before the industry once again has to find a different way. I definitely think you should do some reading on that; it's not an "if" question, it's a "when" question.
 
I have. There are 5nm limits with finFET; you might possibly get 3nm if they could get TFET or CFET to work right, and they've already got 3nm dialed in for a nanowire stack, but they can't get that working right either due to resistances. Pretty much everything points to a very long 7nm era, 5nm only for specific releases, and nothing further without some other epic breakthrough. Maybe photonics: possibly 20x faster than electronics, and it doesn't suffer from many of the electron's fallibilities.
 
It's possibly 20x faster than electronic and doesn't suffer from many of electrons fallibilities.
I suspect that calling "packing a billion photonic gates into a convenient and cost-effective package" a challenge would qualify as understatement of the century. Tunneling FETs might be the last major advancement for consumer-level computing for a very long time.
 
Epic breakthrough, yes, and understatement of the century. The source wouldn't take much, just something to supply the photons; it's the billions of lenses/mirrors (or gates, if you will) that put it in lightsaber territory. A vision, until somebody comes up with a way to make it work.
 
I have. 5nm limits with finfet, possibly you'd get 3nm if they could get TFET or CFET to work right, they've already got 3nm dialed in for a nano wire stack, but can't get that working right either due to resistances.


Transistor Options Beyond 3nm
[QUOTE]
Despite a slowdown in chip scaling amid soaring costs, the industry continues to search for a new transistor type 5 to 10 years out—particularly for the 2nm and 1nm nodes.

Specifically, the industry is pinpointing and narrowing down the transistor options for the next major nodes after 3nm.

FEBRUARY 15TH, 2018
[/QUOTE]

Samsung Unveils 3nm Gate-All-Around Design Tools
[QUOTE]
Samsung declared that its Product Design Kit for 3nm chips is now in alpha

MAY 16, 2019
[/QUOTE]

TSMC becomes first to announce R&D for 2nm node

[QUOTE] TSMC's first 3nm plant in Taiwan will be opened by 2021 and begin mass production in 2022

JUNE 2019
[/QUOTE]

Maybe I'm missing where you think this isn't happening? You have three companies going about it three different ways, and all of them have said they are on track to launch within a certain time frame.
 
For a desktop CPU? That might be true for mobile or other low-wattage uses, but last I heard Samsung doesn't make or design desktop CPUs. There's a very large difference in scope between what a Galaxy S10 needs and what an Intel i9 9900KS needs. So yes, I do believe you are missing...
 
For a desktop cpu? That might be true for mobile or other low wattage usage, but last I heard Samsung doesn't make/design desktop cpus. Very large difference in scope between what a Galaxy S10 needs and what an intel i9 9900ks needs. So yes, I do believe you are missing...

Are you sure about that?

Samsung to supply PC CPUs to Intel
The Korea Herald, Nov 28, 2019

Samsung is reportedly making 14nm CPUs for Intel as supply issues persist
Techspot, Nov 28, 2019

Update: Intel Reportedly Taps Samsung To Produce Supply Constrained Desktop CPUs
Nov 29, 2019

You couldn't be more wrong about who has the capacity, though. TSMC, Samsung, and Qualcomm can all supply 14nm desktop CPUs. TSMC is at near-full production capacity with AMD and Nvidia, so what they could provide doesn't match what Intel needs. Who does that leave?