AMD Ryzen Threadripper & X399 MegaThread! FAQ & Resources

Page 11 - Tom's Hardware community forum
Status
Not open for further replies.

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790
1


I queried Charlie to explain the difference between packages. He didn't. Not that I would trust him, even if some day he chose to reply. This is the same Charlie who wrote that Threadripper has only "two dies" and that everyone mentioning four dies was an "idiot". He has been proved wrong before.
 

juanrga



Nope, because throughput is the number of cores used times the performance of each core.

That is why AMD spent a lot of resources developing a new core with much higher IPC than its predecessors, and it is also why the stock frequencies of the top Ryzen models were pushed to the limits of the 14LPP process, even violating the official TDP specs.

If single-thread performance were irrelevant, AMD would simply have ported Excavator or Jaguar to 14nm and shipped 32 cores for mainstream and 128 cores for the enthusiast segment.
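The throughput arithmetic above can be sketched with a toy model. The per-core scores below are hypothetical illustrations, not measurements of any real chip:

```python
# Toy model: aggregate throughput = active cores x per-core performance.
# Per-core scores are hypothetical, not benchmarks of real CPUs.

def throughput(active_cores: int, per_core_perf: float) -> float:
    """Ideal aggregate throughput for a perfectly parallel workload."""
    return active_cores * per_core_perf

# On a perfectly parallel workload, many small cores can win on paper...
many_small = throughput(32, 0.35)  # e.g. a small low-IPC core at ~0.35x
few_big = throughput(8, 1.0)       # e.g. a big high-IPC core at 1.0x

# ...but a single-threaded workload only ever uses one core, so per-core
# performance sets the floor for responsiveness.
one_small = throughput(1, 0.35)
one_big = throughput(1, 1.0)

print(many_small, few_big, one_small, one_big)
```

The point being made: software rarely scales perfectly, so trading per-core speed for core count is not a free win.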
 

goldstone77

Honorable
Aug 22, 2012
2,244
9
11,965
85


Single-threaded performance, as it's used in gaming, has reached a point of diminishing returns!
 

goldstone77



As a testing methodology for finding out which chip is faster, it works as a metric. What I question is its value as a metric and how it relates to gaming performance, including for future use! CPUs, and hardware in general, are tested and reviewed mostly for a consumer audience. The meaning is subjective, but when the average person looks at the numbers, they see "Oh look, shiny!" They probably don't realize that because they aren't using a Titan or 1080 Ti like the reviewer/tester, or a 144Hz monitor, the difference will never materialize for them, while having 50-100% more cores would be incredibly beneficial. As the chart above at 1440p shows, single-thread performance isn't a 100% reliable means of ensuring future capability at higher resolutions. New technology can come into play and change the status quo.
 


There's a bit of a "chicken and egg" problem with games. We all know there are games that actually do go wide with CPUs and have no problem spawning heavy threads for AI, physics, or other random work that doesn't fit on a GPU, even when the GPU might not be utilized at 100% (DOTA 2 comes to mind here again, for example), and then there's the other majority that doesn't really go wide (FPSes are usually the big offenders here, except for the id Tech engine) and does need IPC plus clock speed to crank more frames and smoothness out of them. Now, if you already have a ton of metrics to define (or see) how well a CPU can go wide, you can leave some games you know go wide out of the picture and just use a couple of good examples (playing a bit of devil's advocate here) for that purpose, assuming the GPU will never be the primary bottleneck in that case.

So I am not bothered by reviewers using AAA games that require "big ST performance" (IPC plus clock speed) as the main game showcase, as long as there are metrics covering how "wide" a CPU can go. And going back to the original "low resolution" argument, it is not unrealistic to say that in one or two GPU generations you will see a CPU bottleneck at any given playable resolution. That is how technology has always evolved, and I'm not even sure we've seen the end of good IPC increases. As I've said, plenty of ancillary things in a modern CPU can affect the overall package performance, including clock speed.

Cheers!
 


No it hasn't; what's happened is that CPUs are "powerful enough" that the GPU is almost always the bottleneck. But as we saw with Bulldozer, CPU bottlenecks can and do still exist.

Also remember: CPU performance = single-core performance * number of cores. Single-core performance is a multiplier, and a more reliable one than adding more cores, because you always get its full benefit (unlike additional cores, which may not be used).
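A quick Amdahl-style sketch of that point, using a hypothetical workload that is 70% parallel (all numbers are illustrative only):

```python
# Amdahl-style sketch: per-core speed multiplies everything, while extra
# cores only shrink the parallel slice. The 70% parallel fraction is a
# made-up illustration, not a measured game workload.

def run_time(work: float, parallel_frac: float, cores: int, core_speed: float) -> float:
    """Time to finish `work` given a serial/parallel split and per-core speed."""
    serial = work * (1.0 - parallel_frac)
    parallel = work * parallel_frac / cores
    return (serial + parallel) / core_speed

base = run_time(1.0, 0.7, 4, 1.0)          # baseline: 4 cores, speed 1.0
faster_cores = run_time(1.0, 0.7, 4, 1.5)  # 50% faster cores
more_cores = run_time(1.0, 0.7, 8, 1.0)    # twice the cores, same speed

print(base / faster_cores)  # full 1.5x speedup from faster cores
print(base / more_cores)    # only ~1.23x: the serial 30% doesn't scale
```

Faster cores speed up both the serial and parallel portions, which is why the single-core multiplier is the "reliable" one.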
 

goldstone77



No it hasn't; what's happened is that CPUs are "powerful enough" that the GPU is almost always the bottleneck.
This is exactly what I've said. Single-thread performance is at the point of diminishing returns in gaming! A much slower Ryzen can keep up with an overclocked i7-7700K, and beat it with a 900-1000MHz frequency deficit in some games. Single-core performance, as it relates to gaming, is absolutely, without a doubt, at a point of diminishing returns! I refer back to the graph I linked in previous posts. There comes a point when faster doesn't matter anymore, and with new games, and games optimized for Ryzen, that is absolutely becoming the case. And like I said, new technology can change the status quo. Single-threaded performance is no longer a 100% indicator of future performance, because it's not an accurate indicator of performance right now! Note that the graph is at 1440p, not 1080p!
 
This is exactly what I've said. Single-thread performance is at the point of diminishing returns in gaming! A much slower Ryzen can keep up with an overclocked i7-7700K, and beat it with a 900-1000MHz frequency deficit in some games. Single-core performance, as it relates to gaming, is absolutely, without a doubt, at a point of diminishing returns!
And then, a year or so from now, you upgrade your GPU and don't see any FPS increases because *surprise*, GPU bottleneck went away.

Seriously, you're repeating the same exact tired argument from the Bulldozer days.
 

juanrga



Nope. What happens is that HU/Techspot force frame-limiting and GPU-bound settings, and that gives the impression that Ryzen can keep pace with a 7700K using Vega, but it is an artifact of their odd settings. The 7700K is much faster at gaming, and that is why even AMD uses the 7700K in game demos of Vega.


 
There is one counterpoint to this whole "but 4K is not realistic" argument: well, it is realistic. You can only test with what is out there to see how 4K performs today. Just because the GPU bottleneck is evident doesn't make it a less worthy test for people wanting to know how 4K actually runs on Ryzen.

You can caveat those tests all you want, but the truth of the matter is that Ryzen is as good as Intel's best at 4K *today*.

Cheers!
 

juanrga



And again the points are ignored...

No one said that 4K is not realistic. It is realistic for those who play games at 4K.

What is being stated is that (i) 4K is not the most popular resolution at which people play games, and (ii) 4K benchmarking is not testing the CPU and doesn't give an idea of the true gaming performance of the CPU, because the CPU is being bottlenecked by the current GPU.

Not your case, but I find it very interesting that some people want Ryzen to be tested only at higher resolutions when compared with Intel chips, but the same people want Ryzen to be tested at low resolutions when compared with Piledriver. Double standards again!
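The GPU-bottleneck claim in point (ii) can be sketched with a toy frame-time model, where each frame waits on whichever of the CPU or GPU is slower. All costs below are made-up numbers for illustration, not measured data:

```python
# Toy bottleneck model: frame time = max(CPU cost, GPU cost).
# CPU cost per frame is roughly resolution-independent; GPU cost scales
# with pixel count. All numbers are hypothetical.

def fps(cpu_ms: float, gpu_ms_per_mpix: float, megapixels: float) -> float:
    """Frames per second when CPU and GPU work on frames concurrently."""
    frame_ms = max(cpu_ms, gpu_ms_per_mpix * megapixels)
    return 1000.0 / frame_ms

for label, mpix in [("1080p", 2.07), ("4K", 8.29)]:
    fast_cpu = fps(5.0, 2.0, mpix)  # hypothetical CPU needing 5 ms/frame
    slow_cpu = fps(8.0, 2.0, mpix)  # hypothetical CPU needing 8 ms/frame
    print(label, round(fast_cpu), round(slow_cpu))
# At 1080p the two CPUs differ clearly (200 vs 125 fps); at 4K the GPU
# term dominates and both CPUs report the same ~60 fps.
```

This is why low-resolution runs expose CPU differences that a 4K run hides.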





The same people who use higher resolutions to pretend that AMD Ryzen is on par with Intel don't use higher resolutions to claim that Piledriver is on par with Ryzen.

The same people who use low resolutions to demonstrate the huge performance gap between Ryzen and Piledriver don't want low resolutions used to demonstrate the performance gap between Ryzen and Intel.

Double standards!
 

goldstone77



I don't know why you continue to use that old Hardware Unboxed review to make your points, because it is misleading. This is a newer review, and it proves the point I'm making: single-thread performance is not an accurate measure for determining future performance, because it is not an accurate way to gauge gaming performance now. Here I list the exact same game tested by Hardware Unboxed using a GTX 1080 and a Vega 64. In this title, the FPS with the Vega 64 is only marginally lower than the Titan's! Also, you use a graph that exaggerates the disparity between the two processors by only displaying minimum frame rates, and it comes from an older review! You are trying to manipulate the results of one game to prove your point, while I display a benchmark showing the performance difference over a multitude of games, in some of which Ryzen does perform better than the overclocked i7-7700K. That further proves my point that single-thread performance is not an accurate measure of gaming performance now, and won't be in the future. New technology, as shown with the Vega 64, proves to have a synergy with Ryzen that is clearly visible in some games, and as that technology improves, one can expect the benefits to only get better.

Here at 1080p we can see the single-thread performance of the i7-7700K @ 4.9GHz beat out the measly 1600 @ 4GHz. With the game below at a higher resolution, according to your belief, the GPU bottleneck should be coming into play, especially for a clock-limited chip like Ryzen, but that doesn't happen. The much slower Ryzen chip is now outperforming the 900MHz-faster i7-7700K! Which again proves my point! Future gaming performance cannot be determined by single-thread performance, because it's not an accurate measure of gaming performance now! Significantly higher performance at higher resolution shows the synergy that is happening between Vega and Ryzen. It is reasonable to expect this to continue to get better in the future.



And here in Prey, it doesn't matter which card you use: Ryzen smashes the i7-7700K @ 4.9GHz at higher resolutions! This is what we can expect to happen in future gaming as resolutions increase above 1080p.

Core i7 vs. Ryzen 5 with Vega 64 & GTX 1080
Clash of Titans
By Steven Walton on September 15, 2017

Here again I give you a multitude of examples not just one.
 


"and that gives the impression that Ryzen can keep pace with a 7700K using Vega, but it is an artifact of their odd settings" -> When you say "artifact" in that context, aren't you implying it is an "artificial" (hence, not realistic) effect?

I don't want to play semantics here, but you're really annoying when it comes to nitpicking things. That is not an impression; IT DOES KEEP UP WITH IT. That is the whole point of what I mentioned. Not only in some benchmarks, but in all of them, as far as I can remember.

It is not unrealistic, nor a bad test. It's just one of many tests you can run with games. And like you say, it is informative for people who are actually interested in 4K gaming (or in pushing a similar number of pixels; VR/AR, maybe?).
 

juanrga



I continue using it because (i) the concept of a GPU bottleneck has not changed in the last decade, so one is not forced to use the newest review, and (ii) it compares different chips, including old Piledrivers.

No one said that single-thread performance is an accurate measure for determining future performance. The claim was a different one. This has been explained to you by gamerk and myself. In fact, gamerk has been explaining the same thing since Bulldozer, and still he sees people making the same mistake again and again and again...
 

juanrga



The word "artifact" in that context refers to CPU performance, not to CPU+GPU performance.

How many times do we have to explain what a "CPU test" is and why it has to be done at 1080p or 720p? How many times do we have to explain that testing games at 4K is not a "CPU test"? Isn't explaining the same thing a dozen times enough? Why do the same arguments and explanations have to be repeated again and again, only to be ignored on each occasion? I am starting to believe this behavior is on purpose...
 

goldstone77



Nope. What happens is that HU/Techspot force frame-limiting and GPU-bound settings, and that gives the impression that Ryzen can keep pace with a 7700K using Vega, but it is an artifact of their odd settings. The 7700K is much faster at gaming, and that is why even AMD uses the 7700K in game demos of Vega.
Yeah, your argument doesn't hold water in several games at 1440p, which is an indicator of future gaming performance as higher resolutions are adopted. And now you want to dismiss the review because it doesn't favor your theory... As I said, single-thread performance is at the point of diminishing returns in gaming! Single-threaded performance from both Intel and AMD is good enough *now* that in a lot of games, especially newer games and games optimized for Ryzen, the FPS difference is very small at lower resolutions. And like I said, some games, like Prey, show Ryzen smashing the performance of the 7700K despite the 900MHz deficit at 1440p, which again proves my point that single-threaded performance is not an accurate gauge of gaming performance now or in the future!


Edit: juanrga, we have been talking about how single-thread performance affects gaming this whole time.
 


Haha, I can finally wholeheartedly agree with something you say: "How many times we have to explain that testing games at 4K is not a 'CPU test'?"

Indeed, it is not a CPU test. It's a game tested at 4K to see how the CPU and GPU combo behaves. And guess what! Even at lower resolutions it's still not a CPU test, because you're still relying on how well the GPU behaves. There are CPU tests that indeed stress the CPU alone, so why include games as well? It all goes back to "what people want to know": more data, more information => better conclusions.
 

goldstone77




What issues? Source? Your statement acknowledges the point I'm making! Nothing you said changes my argument, which still stands: single-threaded performance is not an accurate gauge of gaming performance now or in the future!
 

liberty610

Honorable
Oct 31, 2012
392
0
10,810
12
I see, now that I have come back to this thread for the first time in a couple of weeks, that juanrga is STILL arguing with everyone without really proving any valuable points lol.

Having said that, my 1050x Threadripper does me just fine and dandy for all my gaming, all while rendering video in the background for my more production-related tasks. I don't need digital numbers to prove anything to me; my performance in all categories has increased dramatically since I upgraded from my 7800k. And that's all I care about ;)
 

juanrga



Pardon? I see more people involved in the discussions. And I see arguments and facts.

Even AMD uses the 7700K for gaming demos, because otherwise it would degrade the performance of its Radeon GPUs.
 

8350rocks

Distinguished


Here is why 4K is not the most popular resolution:

1.) Competitive gamers cannot get the frame rates they think they need at 4K yet, so they run 1080p or 1440p at stupid-high refresh rates.

2.) No 4K Laptops

3.) Casual gamers do not spend $500 on a monitor

4.) Single-GPU 4K gaming solutions are only just now becoming a reality in the mainstream; previously, 4K required a multi-GPU setup.

Now that the landscape is changing, 4K will become the most popular desktop resolution as the ecosystem shifts to cards that can reliably run 4K on a single GPU, and as the cost of panels, which has come down dramatically in the last 6-12 months, keeps falling.
 

goldstone77

G.Skill Unveils AMD Ryzen and Threadripper Optimized DDR4 Trident Z RGB Memory Kits – Available in Up To 128 GB Capacities For X399
By Hassan Mujtaba 5 hours ago


G.Skill has officially announced their latest high-performance DDR4 Trident Z RGB memory kits for AMD’s Ryzen and Threadripper platform. The new memory kits ensure the best compatibility and support for AMD’s new generation of processors on the X370 and X399 motherboards.

G.Skill Trident Z RGB Now Fully Optimized For AMD Ryzen and AMD Ryzen Threadripper Platforms – Launching in Up To 128 GB Capacities
First things first: we all love AMD's new Ryzen and Threadripper CPU platforms. They offer high-end performance and disruptive pricing that have shaken things up in the CPU industry. A new CPU platform takes time to mature as it receives updates in the form of fine-tuned software and a range of supported and compatible hardware. In AMD's case, memory support has been crucial, and manufacturers like G.Skill are working around the clock to fine-tune the performance of their memory kits.
As such, G.Skill has announced today that they will be offering an AMD Ryzen and Threadripper specific line of Trident Z RGB DDR4 memory kits. These memory kits come in a wide range of options: capacities of 16 GB, 32 GB, 64 GB, and even 128 GB (8 x 16 GB) if you want to use the X399 platform to its fullest potential, at optimized speeds of DDR4-3200 (CL14), DDR4-2933 (CL14/CL16), and DDR4-2400 (CL15). G.Skill has a unique naming scheme for the AMD-optimized memory kits, which end with "TZRX", while Intel-optimized memory kits carry the "TZR" branding. Following is the list of all memory kits that are optimized for AMD platforms:

Trident Z RGB Memory Kits on AMD Platforms

AMD currently has two platform offerings, where Ryzen supports dual-channel with 2 or 4 memory modules and Threadripper supports quad-channel memory with 4 or 8 memory modules. To give a boost in memory performance to AMD number-crunching workstations and high-end graphic rendering systems, G.SKILL offers several selections for each AMD platform, including memory speeds of up to DDR4-2933MHz or ultra-high capacity at 128GB (8x16GB). For details on specifications and compatibility support of the new Trident Z RGB kits, please refer to the following table:
TZR”X” – The New Line-up
To differentiate the new AMD-optimized Trident Z RGB kits from the originals, look for the "X" at the end of the Trident Z RGB model numbers. Please refer to the following chart, which shows the difference in the model numbers:

OC Profile Support & Availability

These new Trident Z RGB models include OC profile support on compatible motherboards: simply enable the OC profile in the BIOS to achieve these high-performance DDR4 memory speeds. The new models are scheduled for release via G.SKILL authorized distribution partners in October 2017.

Aside from being compatible with AMD platforms, the G.Skill Trident Z RGB kits are some of the best-looking memory modules designed to date. The RGB lighting on top and the solid metal heatsink with brushed-aluminum texture look and feel very premium inside a desktop PC. If you want your PC to look great, these are some of the best memory modules you can purchase right now. Availability is stated for October 2017, but we will update you once these are available at retail outlets.
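The dual-channel vs. quad-channel distinction in the article translates directly into peak theoretical bandwidth: channels x transfer rate x 8 bytes per 64-bit transfer. A back-of-the-envelope sketch:

```python
# Peak theoretical DDR4 bandwidth: channels x MT/s x 8 bytes per transfer
# (each DDR4 channel is 64 bits wide). Decimal GB/s, ignoring overhead.

def peak_bandwidth_gbs(channels: int, transfer_mt_s: int) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR4 configuration."""
    return channels * transfer_mt_s * 8 / 1000.0

print(peak_bandwidth_gbs(2, 3200))  # Ryzen dual-channel DDR4-3200: 51.2 GB/s
print(peak_bandwidth_gbs(4, 3200))  # Threadripper quad-channel: 102.4 GB/s
```

Real-world throughput lands below these peaks, but the 2x channel count is why X399 is attractive for the number-crunching workloads the article mentions.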
 


As I said, now that you have a multithreaded GPU driver, there is a much greater chance of performance loss due to a thread being assigned to a "logical" CPU core, causing a deadlock situation on the CPU.

Multi-GPU tests tend to be more CPU-bottlenecked, but there are some really poor implementations of multi-GPU support to watch out for. Never mind that multi-GPU is basically dead (partly because it's now on the devs, not the driver, to support it).
 

goldstone77


AMD Bids Farewell To CrossFire After 12 Years, Retiring Brand In Favor Of mGPU
By Khalid Moammer
Sep 22
http://www.tomshardware.com/forum/245454-33-crossfire-faqs/page-26#20204602
 