Question: Anyone else slightly disappointed by the new Ryzen?

Atm I'm slightly disappointed with how the new Ryzens are looking. I was hoping to get an 8/16, but I'd need to get a 3800X for that, and there isn't much point in that at $429. I can buy a 2700 for $200 atm, so what's the point apart from PCIe 4.0?
 
Sigh...

Most benefits for users with low requirements are usually ancillary to the CPU itself (as someone else pointed out) and belong to the whole platform:
  • More RAM. Because Chrome, and OEMs slowly putting more in each gen, I guess?
  • More SSD/NVMe options.
  • Better/more external connectivity options. See motherboards with WiFi and Bluetooth built in. Some offer really good upgraded encryption and other modern support options, especially BT 4.x. USB 3+ support, or even USB-C ports, could be a compelling option for some.
  • Security updates (mostly Intel, lel).
  • Better fixed-function subsystems (AES and enterprise instrumentation, or VP9/AV1 hardware decoding, for example).
  • Lowered thermals. Yes, this could be something your average Joe can be pleased with.

We enthusiasts are nothing but a dirty niche in the bigger picture of users. Think outside the mud puddle, everyone. Also, most big companies and corps lease their hardware anyway; they have a fixed cost for most of their hardware, so upgrades for them happen on a set cycle. Special requirements are... Well... Special requirements.

Cheers!
 
As an employee, would you personally throw $600 at a PC to improve your productivity by 3%, i.e. by cutting 30% off the 10% of your time spent waiting on the PC for things you can actually just sit in front of it and wait for? No, you'd just keep using whatever the company provided, and unless it is grossly inadequate, you wouldn't mind it much. A 30% improvement still does not really matter to you directly; it only matters to the company if it thinks it'll get that much extra productivity out of you. For any operations longer than that, most companies have no shortage of ancillary work to do in the meantime, so the net productivity gain is even lower.
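As a quick sanity check of that 3% figure, here is a minimal sketch of the arithmetic, using only the 10% wait-time share and 30% reduction quoted above (variable names are purely illustrative):

```python
# Back-of-the-envelope check of the productivity claim above.
wait_share = 0.10  # fraction of the workday spent waiting on the PC (figure from the post)
reduction = 0.30   # fraction of that waiting removed by the upgrade (figure from the post)

time_saved = wait_share * reduction            # 0.03, i.e. 3% of the workday
print(f"Productivity gain: {time_saved:.0%}")  # Productivity gain: 3%
```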
That's a fair point and I sure won't dispute it - I'm still rocking Core 2 Quads at work, because once these oldies are fed a 64-bit Windows, 8 GB of RAM and a SATA SSD, they really do well at office stuff. And yes, a 3770K is even better at that - or a 4670K (what I used to have).
Mind you, that's for office work.
Once you start needing real horsepower (CAD, rendering, gaming, etc.), then you'll take as much CPU horsepower as you can get - and here, an R5 1600 was already a very worthy upgrade... in 2017. Now that quad core has become the baseline, anything you do will run faster with more cores. And if you had decided, back in 2017, to build an R5 1400 machine on a B350 or X370 chipset, and you DO need the horsepower, then knowing that you can increase your CPU power by 30% single-threaded, and triple it multi-threaded (going from a 4-core, 8-thread, 3.6 GHz 1400 to a 12-core, 24-thread, 3.9 GHz 3900X), for $500 is a downright bargain compared with Intel's offerings.
And don't tell me my best case is a stretch! Because the R5 1400 was pretty much a poor man's i7 when it came out, and as such was a nice pick if you wanted to get a "modern" machine on the cheap back then. Worst case though, going from a 1700X to a 3900X would yield "only" 70% more CPU power - but then the only way to get more oomph would be to move into HEDT territory.
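For what it's worth, a naive cores-times-clock estimate (which ignores Zen 2's IPC gains and memory scaling, so purely illustrative) lands in the same ballpark as that "triple it" figure:

```python
# Rough multi-threaded scaling estimate for the 1400 -> 3900X upgrade described above.
# Assumption: throughput scales with cores x clock; IPC and memory effects are ignored.
r5_1400 = {"cores": 4, "clock_ghz": 3.6}
r9_3900x = {"cores": 12, "clock_ghz": 3.9}

uplift = (r9_3900x["cores"] * r9_3900x["clock_ghz"]) / (r5_1400["cores"] * r5_1400["clock_ghz"])
print(f"Theoretical multi-threaded uplift: ~{uplift:.2f}x")  # ~3.25x, before Zen 2's IPC gains
```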
 

InvalidError
And don't tell me my best case is a stretch! Because the R5 1400 was pretty much a poor man's i7 when it came out, and as such was a nice pick if you wanted to get a "modern" machine on the cheap back then. Worst case though, going from a 1700X to a 3900X would yield "only" 70% more CPU power - but then the only way to get more oomph would be to move into HEDT territory.
The stock Ryzen 1400 would be the VERY POOR man's i7, since the i5-3470 is still up to 20% faster in single-threaded performance when not using AVX2 instructions (not supported by Ivy Bridge) and only ~10% slower in multi-threaded benchmarks despite the 1400's SMT advantage. An actual i7 would be another 20-30% faster on top of that.

In the case of most user-interactive software like CAD and games, having 200 cores is of limited use, since user-interaction management is usually a single-threaded bottleneck on everything else whenever the user himself isn't the bottleneck; hence the desktop CPU market's continued heavy emphasis on single-threaded performance, even though many simpler, slower (GPU-like) cores would be far more cost- and power-efficient.
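To illustrate why a single-threaded bottleneck caps what extra cores can do, here is a minimal Amdahl's-law sketch; the 40% serial fraction is an assumption picked for illustration, not a figure from this thread:

```python
# Amdahl's law: overall speedup is limited by the serial (single-threaded) fraction.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

serial = 0.4  # assumed share of the workload stuck in single-threaded interaction handling
for cores in (4, 12, 200):
    print(f"{cores:>3} cores -> {amdahl_speedup(serial, cores):.2f}x")
# 4 -> 1.82x, 12 -> 2.22x, 200 -> 2.48x: past a point, extra cores barely help.
```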
 
The stock Ryzen 1400 would be the VERY POOR man's i7, since the i5-3470 is still up to 20% faster in single-threaded performance when not using AVX2 instructions (not supported by Ivy Bridge) and only ~10% slower in multi-threaded benchmarks despite the 1400's SMT advantage. An actual i7 would be another 20-30% faster on top of that.

In the case of most user-interactive software like CAD and games, having 200 cores is of limited use, since user-interaction management is usually a single-threaded bottleneck on everything else whenever the user himself isn't the bottleneck; hence the desktop CPU market's continued heavy emphasis on single-threaded performance, even though many simpler, slower (GPU-like) cores would be far more cost- and power-efficient.
I don't know where you pulled your 20% extra speed from (UserBenchmark puts it at 10%, 12% tops, in single-threaded), especially now that all the Spectre/Meltdown patches have been bogging it down; if, to you, an 'actual' i7 is the 7700K from pretty much the same era, then yes, an extra GHz of clock speed and 15% better IPC did give the i7-7700K a 40+% performance lead - for 2.7 times the price. A hefty premium (especially when you consider that the i7 comes with no cooler and needs a pricey motherboard to run).
As for CAD, most of the interaction's slowdowns actually come from how well the software is optimized to handle the GPU (and since I had to set up and fine-tune a workstation for a Catia, AutoCAD, SolidWorks and Cinema 4D user recently, I think I'm qualified to speak about it), and once that is done, from how fast it can move data in RAM. Granted, early Ryzen with the original BIOSes sucked at moving data - but this changed rather quickly. And user interaction is all well and good, but when you need to render your project, nothing beats a bunch of CPU cores casting rays left and right - and there, nothing in the x86 world of 2017 beat a nice old R7 1700 for the price.
 

InvalidError
when you need to render your project, nothing beats a bunch of CPU cores casting rays left and right - and there, nothing in the x86 world of 2017 beat a nice old R7 1700 for the price.
When you need a ton of processing power for embarrassingly parallel stuff, nothing beats GPU acceleration for the price. $2000+ GPUs may sound expensive but they are still much cheaper per TFLOP than desktop CPUs.
 
As GPU-accelerated stuff becomes increasingly prevalent, whatever software still doesn't support GPU acceleration will lose more customers to software that does.
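To make the per-TFLOP point concrete, here is a hypothetical comparison; the prices and TFLOP figures below are illustrative assumptions, not numbers taken from this thread:

```python
# Hypothetical cost-per-TFLOP comparison (all figures are illustrative assumptions).
parts = {
    "high-end GPU": {"price_usd": 2000, "tflops_fp32": 14.0},
    "desktop CPU":  {"price_usd": 500,  "tflops_fp32": 1.5},
}
for name, p in parts.items():
    print(f"{name}: ~${p['price_usd'] / p['tflops_fp32']:.0f} per TFLOP")
# Even at $2000+, the GPU works out several times cheaper per TFLOP.
```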
These very specific applications that cost thousands of bucks per seat and have 10 to 20 years of legacy code embedded in them don't usually adopt new stuff fast, and are even slower at dropping old stuff - there's a reason why they forced Khronos to keep a compatibility profile in OpenGL.
 
No need to wait for reviews to be disappointed with core-per-dollar stagnation. Assuming the best IPC and clock gains possible, we're still talking only 25-30% more performance per dollar than two years ago, which isn't particularly exciting either. Sure, this is better than Intel's 5-7%/year for most of the past eight years, but not something most people will find worth bothering with if they already have anything somewhat recent.

I thought I'd be upgrading to a 3600 myself, but it turned out not to be the CPU I was hoping for. Maybe I'll get a 3700X if it drops below $200 next year.
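For context on the comparison above, a quick compounding check using only the figures quoted in that post (25-30% over two years versus Intel's 5-7% per year):

```python
# Two-year gains implied by the figures in the post above.
amd_two_year = (0.25, 0.30)                  # 25-30% more performance per dollar over two years
intel_two_year = (1.05**2 - 1, 1.07**2 - 1)  # 5-7% per year, compounded over two years
print(f"AMD:   +{amd_two_year[0]:.0%} to +{amd_two_year[1]:.0%}")
print(f"Intel: +{intel_two_year[0]:.0%} to +{intel_two_year[1]:.0%}")
# AMD: +25% to +30%; Intel: +10% to +14% over the same two years.
```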
Lol, core-per-dollar stagnation? We were at 4 cores max for, what, 10 years?
 
Until eight years ago, performance still doubled every 2-3 years and prices for that performance were still dropping. My disappointment is with the brakes coming off after 10 years only to come back on for the next 4+ years.

I feel like that's more because AMD took a good long while with Zen in the oven, in order not to have a repeat of Bulldozer, resulting in such a significant increase in performance. Now that they are releasing new processor generations essentially yearly (or every 2 years), I simply can't expect as much of a performance increase from Zen to Zen 2, or Zen 2 to Zen 3, as I did from Bulldozer to Zen. It is a shame that they are more or less following Intel's tick/tock method, but I still think the overall rate is up from the past 8-10 years.

I think a decade ago it was easier to make revolutionary changes to an architecture, like Bulldozer to Zen, than it is now. Also, there wasn't as much instruction-set and software baggage back then as there is now.
 

InvalidError
I feel like that's more because AMD took a good long while with Zen in the oven, in order not to have a repeat of Bulldozer, resulting in such a significant increase in performance. Now that they are releasing new processor generations essentially yearly (or every 2 years), I simply can't expect as much of a performance increase from Zen to Zen 2, or Zen 2 to Zen 3, as I did from Bulldozer to Zen. It is a shame that they are more or less following Intel's tick/tock method, but I still think the overall rate is up from the past 8-10 years.
While it may be nearly impossible to increase single-thread IPC much further and clock frequencies have hit what appears to be a practical ceiling at ~5 GHz, there is plenty of room left to improve multi-threaded performance. If AMD had wanted to press its current advantage for all it is worth, I have no doubt it could have made the 3600 a $200 8c16t part and still turned more profit than it has for most of the past 10 years while drastically turning up the heat on Intel. Since Intel is going to be effectively MIA in the desktop space for the next 2-3 years though, AMD has every incentive to hold back so it can maximize profit instead of massively increasing market share.

AMD could have delivered way more than 20-30% more performance per dollar this year and completely obliterated Intel's lineup for the foreseeable future; it simply chose not to.
 
While it may be nearly impossible to increase single-thread IPC much further and clock frequencies have hit what appears to be a practical ceiling at ~5 GHz, there is plenty of room left to improve multi-threaded performance. If AMD had wanted to press its current advantage for all it is worth, I have no doubt it could have made the 3600 a $200 8c16t part and still turned more profit than it has for most of the past 10 years while drastically turning up the heat on Intel. Since Intel is going to be effectively MIA in the desktop space for the next 2-3 years though, AMD has every incentive to hold back so it can maximize profit instead of massively increasing market share.

AMD could have delivered way more than 20-30% more performance per dollar this year and completely obliterated Intel's lineup for the foreseeable future; it simply chose not to.

I agree with that. I remember Adored's information, and I did in fact believe it for a while, until closer to the actual launch when there were more leaks on Ryzen 3000. It's a shame that AMD didn't choose to essentially lower everything by one pricing tier, because they certainly could have done that. It means they are going to profit a lot more, at the expense of customers. For better or for worse, it's still better than Intel, but unfortunately, that's the only other competitor AMD has in CPUs.
 
For me, it will take either my wife's or my son's FX system going down to think about trying a 3000 on my X470.
I really don't see much in the specs that would really benefit from putting a 3000 on a B450 or X470 mobo.
The 500 series is supposedly going to be expensive to get the full potential of the 3000s.
One benefit though:
the 3000s' memory controller is rated for 3200 now.
So for those that feel they need 32 GB (4x8) of RAM, at least they should be able to get closer to the sweet spot of 3200 compared to 2933.
We'll see in time just how high we can get 16 GB (2x8) or 32 GB (2x16) kits - 3600, 3800, 4000, 4200 - only time will tell.
Personally, I don't have that kind of money to experiment with, so I will wait, praying that the two FX systems' mobos don't go down in the meantime, and follow the reviews / issues brought up here on TH.
But if one does, I will most likely put a 2700X in mine, move the 2600X to a new mobo and RAM for my son, and his 8350 to the wife's if anything happens.
His is an FX-8350 on a Gigabyte 990FXA-UD3.
Hers is an FX-6300 on a Gigabyte 970 SLI.

Have a good one, guys/gals; I will sit back and keep praying for now.
 

InvalidError
The 500 series is supposedly going to be expensive to get the full potential of the 3000s.
The chipset itself is only ~$10 more. Give it a few months for the dust to settle on those ridiculously overbuilt halo launch products for early-adopter enthusiasts, and then we should see more reasonable X570 boards pop up that cost only $20 or so more than their X470 counterparts.
 
Ya, the higher core counts and core clocks look good for those that want or need that.
But for what I do or need, it would be way overkill - nothing more than being able to say "ya, I got whatever build" for bragging.
So ya, a little disappointed that some of the rumors definitely didn't come to fruition, like the 6-core becoming an 8-core, the 8-core becoming a 12-core, and the higher core frequencies.

If and when - hopefully not too soon though
(praying to the FX GODS to keep those 2 FX mobos in good health) LoL -
I myself would probably go no higher than the 3700X.
And at that time, prices for the 500 series would determine whether it be X470 or X/B5??, as you say, when the dust settles.
Don't get me wrong, $50.00 over what I paid for my X470 won't stop me, but $???.00 will certainly make me think whether it is really worth it.
Later, guys
 

InvalidError
It's not really the brakes coming on; you can expect them to offer 16 cores the gen after 8. They've bumped it up to 12 with the 3900X.
Not particularly helpful for performance per dollar as when you move up the product stack, performance per dollar almost always gets worse. Also, moving up the product stack is only an option if you have no specific budget (price point) in mind.
 
Not particularly helpful for performance per dollar as when you move up the product stack, performance per dollar almost always gets worse. Also, moving up the product stack is only an option if you have no specific budget (price point) in mind.
They're not going to sell a 12c CPU for 5 quid, mate. 8 cores is relatively new; you get a decent bump with the shrink plus higher clocks across the board. You're getting around Intel IPC and 6 cores / 12 threads for 200, compared to 4 cores / 4 threads for 250 with the 7600K, or 4/8 for 350 with the 7700K.
 

DMAN999
^ Personally, if I can get a Ryzen 7 3700X for around $300 next year and can sell my Ryzen 5 2600 for $100, I will be very happy, as long as the 3700X performs as well as I believe it will.
But obviously I will not make any decision regarding upgrading until I see how the 3700X performs on my ROG Strix B450-F Gaming MB.
For me the worst case scenario I foresee is that I upgrade to a Ryzen 5 3600 and OC it like I am currently doing with my Ryzen 5 2600.
Either way I am happy with my current setup and won't even consider upgrading for at least six months or so.
 