cryoburner :
bloodroses :
The other issue with high core counts is that there will be times when cores sit idle, waiting on results from instructions still running on other cores. That's part of the reason why clock speed (on the same architecture) is still king. A good example of this is the benchmarks for Dolphin (the GameCube/Wii emulator), where Intel's slightly quicker architecture combined with higher clock speeds makes AMD chips (including Ryzen) pale in comparison. This will continue to improve as compilers and developers get better, but it is not a quick process.
The Dolphin emulator would actually be a somewhat poor example, since it's not a game itself, but rather an emulator of a console with a single-core CPU with an entirely different PowerPC architecture. Recreating the hardware of a processor core in software is something that's always going to be inefficient, and unlike a typical game, is not a process that can be divided up. Now, if we were emulating a multi-core console, having additional cores would likely make a massive difference, since each could be simulated on its own core.
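To illustrate why emulating a single guest core is inherently serial, here's a minimal sketch using a made-up toy instruction set (purely illustrative, not Dolphin's actual interpreter or JIT). Each instruction reads register state written by the one before it, so the loop can't be split across host cores:

```python
# Toy register-machine interpreter (hypothetical mini-ISA, for illustration only).
# Every iteration depends on state produced by the previous iteration, which is
# why one emulated core can't be parallelized across several host cores.
def run(program, regs):
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs[a] + regs[b]   # depends on earlier results
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]   # r2 here was just written above
    return regs

regs = run([("add", "r2", "r0", "r1"),      # r2 = r0 + r1
            ("mul", "r3", "r2", "r2")],     # r3 = r2 * r2  <- needs r2 first
           {"r0": 2, "r1": 3, "r2": 0, "r3": 0})
```

A multi-core console, by contrast, gives the emulator several independent loops like this one, and each can naturally be pinned to its own host core.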
Another thing significantly affecting the results would be the fact that much of the time-critical code in an emulator is going to be manually optimized using low-level code over the course of many years, and tailored specifically for the hardware that the developers have been using. A new processor microarchitecture is not going to benefit from many of these CPU-specific optimizations, so it's not just a case of increased clocks improving performance. In any case, judging by the performance I saw in some videos of Dolphin running on Ryzen, the CPUs seem to handle the emulator just fine.
Back to games: it actually shouldn't be that hard for big developers to optimize for additional cores. Many parts of a game can run mostly independently of one another. Processes like AI, physics, sound processing and character animation typically shouldn't need to check in with one another more than once per frame. Many of these tasks could also be split up further, so you might have three cores dedicated to physics, for example, each operating on different objects that don't need to directly interact with one another. It's not much of a step from optimizing a game for four cores to optimizing it for six or more. There is, of course, the fact that maximum performance will be limited by the most demanding thread, but anything that can be split off from that thread onto another core should help even things out.
It is worth noting that most developers want their games to run reasonably well on mid-range hardware, though, and four cores are still the norm on the Intel side. Intel will apparently be launching 6-core CPUs for their mainstream platform within the next year or so, however, so combined with Ryzen, we might start to see more games make use of additional cores, even if the install base of four-core parts will be around for a while.
ddferrari :
Wow- so many AMD fanboys here. "Sure, it sucks right now compared to the i5 for gaming, but just WAIT for the future! Tom's is biased!" Sure...
Aren't you the same group of bozos who were also defending AMD's GPUs last year? "Just you wait for DX12 man- it'll make a 480 eat the 1080 for lunch! DX12 is the future!"
Uh-huh. Google "new DX12 titles 2017" and then be very quiet- you'll actually hear crickets chirping.
https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support#Upcoming_games
cryoburner :
ddferrari :
Aren't you the same group of bozos who were also defending AMD's GPUs last year? "Just you wait for DX12 man- it'll make a 480 eat the 1080 for lunch! DX12 is the future!"
Uh-huh. Google "new DX12 titles 2017" and then be very quiet- you'll actually hear crickets chirping.
https://en.wikipedia.org/wiki/List_of_games_with_DirectX_12_support#Upcoming_games
It definitely does sound like there's some kind of overzealous "fanboy" here, I'll give you that. : D
I'll only reply to this to point out that linking to a Wikipedia list with an empty "Upcoming games" section for DirectX 12 does not, in any way, imply that game developers have suddenly stopped using DirectX 12. It's just not something they need to announce. Considering that over a dozen AAA games included support for DX12 in 2016, and over half of Steam users are now on Windows 10, I would hardly expect developers to randomly stop supporting it. And of course, there's Vulkan too, the other big new lower-level API that tends to favor AMD GPUs and minimizes CPU load.
True- Wikipedia is hardly the only go-to for info, but I honestly found very few results anywhere when Googling DX12. One would think that developers working on DX12 games- and dropping teasers- would mention it... after all, it seems some gamers actually base purchasing decisions on the API (I'm guessing 470/460 owners).
Either way, it certainly appears that the DX12/Vulkan hype train has quieted down, especially since Nvidia's cards aren't always being beaten in DX12 titles. It's a good advantage for AMD, but hardly a slam dunk.