AMD's Future Chips & SoC's: News, Info & Rumours.

Page 100, Tom's Hardware community forum.
You'll want to watch a good side-by-side comparison of the 3600X vs. the 8700K (stock): 8700K running at 4,295 MHz vs. 3600X at 4,175 MHz, both using 32GB of RAM. It's impressive! The real-time side-by-side comparison starts around 10:26.
I watched that video and was thoroughly impressed.

He said the 3600X's performance was virtually identical to the more expensive 8700K. While he has been pro-Ryzen in the past, praising Ryzen for its good value, he said the 8700K should only be bought if you already have a 300-series Intel motherboard with something like an i3 and are looking to upgrade.
He also said he never recommended Ryzen for 144 Hz monitors before, but now he would.
 

InvalidError

Titan
Moderator
Reviews of these CPUs, the 3600X & 3800X, seem to be rare.
AMD didn't provide review samples for launch, the main reason probably being that it wants to push reviews (and sales) of the 3700X and 3900X first. I'd expect the 3800X to get crucified for providing marginal benefit over the 3700X, so I'm not surprised at all that AMD didn't provide samples of that one; it won't make sense until it drops to something like $350.
 

goldstone77

Distinguished
Aug 22, 2012
2,245
14
19,965
Ryzen 3000 & Navi Megathread

Submitted by Wendell on July 6, 2019
With Auto Overclocking, I would say I saw a max of about 4.65 GHz 1T performance. In the review guide provided by AMD, they suggested 4.3 GHz all-core was not unreasonable, but with PBO just set to "enabled" I never saw anything above 4.25 GHz all-core.

However, I was able to set a manual overclock of 4.5 GHz, all-core, with a voltage of 1.45 V, and things were mostly stable. This yields a score of over 9100 in CPU-Z's built-in mini-benchmark.

I am not sure what the maximum safe long-term voltage is for 7nm, but 1.45 V for long-term use strikes me as perhaps a little high, so I would not recommend you do this.



https://level1techs.com/article/ryzen-3000-navi-megathread
 
Wendell says:
"Compared against Intel's HEDT CPUs (X299), AMD easily wins for both single- and multi-threaded compute-focused workloads. It's shocking, really.
I would expect AMD's next-generation HEDT product to devour this entire market whole."
also
"In real-world scenarios, AMD generally was at parity or did better. In synthetic benchmark scenarios, Intel might be slightly ahead for single-threaded or lightly-threaded apps (and with a 5 GHz to 5.1 GHz clock speed on the Intel side vs. 4.6 GHz on the AMD side).
I think I've got no choice but to call this one for AMD, at least in real-world scenarios. It's generally the faster CPU now in all the "real-world" testing I've done so far. "
 

jdwii

Splendid
OK, if anyone knows anything about IPC, this chart right here says it all about Zen 2.

[Chart: 111185.png]
 

jdwii

Splendid
What is going on? This makes no sense to me at all: the 3700X loses to the 9900K in single-threaded performance by 11% but wins in Geekbench 4 multithreaded by 13%, and both are 8C/16T. Based on this review, it seems like the 3700X really does equal a 9900K in productivity tasks even with a much lower frequency; I personally didn't expect it to be that good. Doubling AVX to 256-bit per instruction was a smart move; I think that is why they are getting these gains in some apps.
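The arithmetic behind that AVX point can be sketched quickly (an illustration, not AMD's implementation details; real speedups depend heavily on the workload): widening the floating-point datapath from 128-bit to 256-bit per instruction doubles the number of data lanes each vector instruction processes.

```python
# Rough lanes-per-instruction arithmetic behind the AVX claim.
# Zen 1 split 256-bit AVX ops into two 128-bit micro-ops; Zen 2
# executes them natively on 256-bit datapaths.

def lanes(register_bits, element_bits):
    """How many elements fit in one SIMD register."""
    return register_bits // element_bits

zen1_fp32_per_op = lanes(128, 32)  # 4 single-precision floats per op
zen2_fp32_per_op = lanes(256, 32)  # 8 single-precision floats per op

# Twice the work per vector instruction in AVX-heavy code paths.
print(zen2_fp32_per_op / zen1_fp32_per_op)  # 2.0
```

That factor-of-two per instruction is why AVX-heavy benchmarks can show outsized gains even when clocks are lower.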
 

jdwii

Splendid
So not just in Cinebench but in other real-world applications we see higher IPC.


We need one 8-core CCX per chiplet, and more, to improve gaming performance; I know that's why it's behind.
 
What is going on? This makes no sense to me at all: the 3700X loses to the 9900K in single-threaded performance by 11% but wins in Geekbench 4 multithreaded by 13%, and both are 8C/16T. Based on this review, it seems like the 3700X really does equal a 9900K in productivity tasks even with a much lower frequency; I personally didn't expect it to be that good. Doubling AVX to 256-bit per instruction was a smart move; I think that is why they are getting these gains in some apps.

Probably workload dependent. Performance characteristics can change based on what type of work needs to be done, and it's possible there's a bottleneck in certain workloads versus others.
 

JaSoN_cRuZe

Honorable
Mar 5, 2017
457
41
10,890
Intel gets its advantage in gaming only from its clock speed and ring-bus architecture, while Ryzen wins against Intel on IPC, power consumption, productivity & value. If they can close the clock-speed gap to near 5 GHz, similar to Intel, then Intel is doomed on all fronts.
 

InvalidError

Titan
Moderator
The X470 and Ry2K launch was very clean, at least.
Mostly OK, but quite a few people weren't exactly pleased about how poorly compatibility with 300-series boards got communicated. People thought AMD's four-year commitment to AM4 meant no issues upgrading; that didn't turn out so well, with AMD having to institute its boot-kit program to take the PR heat off.
 
AMD didn't provide review samples for launch, main reason probably being that it wants to push reviews (and sales) of 3700X and 3900X first. ....

Note that the 3600/X and 3900X use chiplets with two disabled cores on each. The 3600X does require higher-quality silicon, though. And then note the 3800X requires a perfect chiplet: that is, high-quality silicon and no disabled cores.
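The binning logic described above can be sketched as a simple lookup (core configurations are AMD's published Matisse specs; the "perfect die" flag is the point being made about the 3700X/3800X):

```python
# Sketch of Ryzen 3000 SKU -> chiplet configuration, as discussed above.
# A fully enabled chiplet (8 of 8 cores) means no defective cores can
# be fused off, i.e. it needs a "perfect" die.

SKUS = {
    # sku: (chiplets, cores enabled per chiplet)
    "3600":  (1, 6),
    "3600X": (1, 6),
    "3700X": (1, 8),
    "3800X": (1, 8),
    "3900X": (2, 6),
}

def total_cores(sku):
    chiplets, per_chiplet = SKUS[sku]
    return chiplets * per_chiplet

def needs_perfect_die(sku):
    return SKUS[sku][1] == 8

print(total_cores("3900X"))        # 12
print(needs_perfect_die("3800X"))  # True
```

Note the table also shows why the point raised below about the 3700X holds: it needs a fully enabled chiplet just like the 3800X.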

I believe they want the perfect (or nearly perfect) chiplet dies to be held in reserve for Rome for now. They have that exascale supercomputer to build, after all, and that's gonna get a lot of attention.

This isn't like Ry2k; it's more of an all-new architecture. So it seems natural the growing pains will be more like the Ry1k roll-out. I don't look for things to settle down until the third or so AGESA update!
 
You forgot that the 3700X uses a fully enabled chiplet too. The only thing extra the 3800X provides over the 3700X is slightly higher binning, unlikely to be worth the extra $70.
I think the 3700X might be a calculated risk: let go of some lesser dies with all cores enabled so that you have as complete a product line as possible for reviewers to chew on at release. And it is a winner in its class, as we've seen.

Rome is where AMD's big money is. Not only the exascale deal, but they're doubtless convincing the huge cloud server operators they can deliver. But they really need the traction in the enthusiast/home marketplace to appear 'relevant'. It's a tough choice and a lot of pressure from product line managers. Engineering and manufacturing could easily be sweating every 8-core die they release to the desktop division.

I have to keep in mind AMD's still in a tough spot: they've seriously tweaked the nose of the lion king for three years running, but the lion will respond. They have to generate cash to fuel product development (Ry4k and Ry5k) so they're ready for the lion's roar. Let's face it, Rome and server-class systems are what will do that, not desktop.
 