What optimizations? Through the extensions? SSE? AVX? For the most part there are very few optimizations for games to take advantage of on a CPU, unless they plan to use the CPU for, say, physics and push it through AVX.
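As a rough illustration of what "pushing physics through AVX" can look like (the particle data and the update step here are made up for the sketch, not taken from any real engine):

```cpp
// Toy sketch: step eight particle velocities at once with 256-bit AVX vectors.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(32) float vel[8] = {1, 2, 3, 4, 5, 6, 7, 8};        // hypothetical velocities
    alignas(32) float acc[8] = {9.8f, 9.8f, 9.8f, 9.8f,
                                9.8f, 9.8f, 9.8f, 9.8f};        // gravity on each particle
    const float dt = 0.016f;                                    // one 60 Hz frame

    __m256 v = _mm256_load_ps(vel);
    __m256 a = _mm256_load_ps(acc);
    v = _mm256_add_ps(v, _mm256_mul_ps(a, _mm256_set1_ps(dt))); // v += a * dt, 8 lanes at once
    _mm256_store_ps(vel, v);

    for (float f : vel) std::printf("%.3f ", f);
    std::printf("\n");
}
```

Built with AVX enabled (e.g. -mavx or /arch:AVX), each add processes eight lanes instead of one, which is the kind of win a physics path can get from the CPU.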
But code is not as precise as you make it out to be. Here's a simple example. Prior to Ryzen, all AMD cores were real physical cores, even on Bulldozer and later. There were no 'virtual' cores and hence no SMT. As a programmer you could therefore skip SMT-aware load balancing entirely and just assign threads in sequence or at random (cores 1,2,3,4 or 1,4,2,7), instead of putting heavy threads on cores 1,3,5,7 and light threads on 2,4,6,8 the way you would on an i7, where the latter set are not real cores. That shortcut creates a monster performance reduction on Ryzen, because some cores are completely bottlenecked while others are left with no work; not because Ryzen couldn't do the work, but because that behavior was fine for every other AMD chip. You'd have used the shortcut because it's one line of code, while proper enumeration of features and SMT checking takes a page or more and thus makes your program slower.
That is a real example taken from the Ryzen AMA btw.
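For anyone curious what that "page or more" of proper enumeration roughly looks like on Windows, here is a minimal sketch (error handling and processor groups glossed over; this is my own simplification, not the code from the AMA):

```cpp
// Count physical cores and detect whether any of them expose SMT.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<char> buf(len);
    GetLogicalProcessorInformationEx(
        RelationProcessorCore,
        reinterpret_cast<PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX>(buf.data()), &len);

    int physical_cores = 0;
    bool smt = false;
    for (DWORD off = 0; off < len; ) {
        auto* info = reinterpret_cast<PSYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX>(buf.data() + off);
        ++physical_cores;                                    // one record per physical core
        if (info->Processor.Flags & LTP_PC_SMT) smt = true;  // core exposes 2+ logical CPUs
        off += info->Size;                                   // records are variable-sized
    }
    std::printf("%d physical cores, SMT %s\n", physical_cores, smt ? "present" : "absent");
    // With this check, heavy threads can be spread across physical cores on
    // Ryzen and i7 alike instead of relying on hard-coded vendor assumptions.
}
```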
This is also the point in time when AMD is being benchmarked as the underdog. Nobody expects AMD chips to support modern technologies, so developers have disabled many of them, or simply implemented them only on the Intel side of the 'ifdef' code.
jimmysmitty :
My point was that the process is valid. It is a valid way to test the game. Neither you nor I have any idea how games will perform in the future. Remember all the hype behind DX12? Still waiting for that insane performance boost.
Then look at the raw texels being pushed in Ashes of the Singularity and you will realize that when someone with GTA money gets hold of it, it will do a huge amount. It's the only reason VR is even an option on current-gen consoles. You won't see that insane performance boost until DX11 cards are hopelessly outdated... and Windows 7 needs to go for that to be possible.
jimmysmitty :
And core count advantage has been touted since the FX series. Still waiting to see a game where it makes a big enough difference.
The only benefit to show 4K max settings is if you are reviewing a GPU.
Speak for yourself; many people have Ultra 4K plans, or VR, which can and does push as many pixels in some setups.
jimmysmitty :
That actually brings up another side people argue: driver optimizations. To be honest there is very little of that for CPUs, unlike GPUs. I update my drivers pretty constantly and I have never seen a massive performance increase from installing the newest chipset drivers.
Chipset drivers are not CPU drivers. CPU drivers are provided in the form of microcode updates, which are packaged inside your BIOS updates. Look at BIOS change logs next time: where you see 'support added for X new chip', that BIOS contains a microcode update, which MAY then pair with an updated chipset driver version to complete the chain. Chipset drivers improve reliability and add support for advanced commands and interfaces, like new chips, bus configurations and microcode optimizations. They do not provide performance increases.
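If you want to see that microcode lives with the CPU rather than with the chipset driver, one easy place to look is /proc/cpuinfo on a Linux box (a small sketch; on Windows you would dig the revision out of the BIOS screen or vendor tools instead):

```cpp
// Print the currently loaded CPU microcode revision as reported by the kernel.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream cpuinfo("/proc/cpuinfo");
    std::string line;
    while (std::getline(cpuinfo, line)) {
        if (line.rfind("microcode", 0) == 0) {   // line starts with "microcode"
            std::cout << line << '\n';           // e.g. "microcode : 0x1000065"
            break;
        }
    }
}
```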
jimmysmitty :
I think Ryzen will get a bit better but I do not see it getting insanely better. It just is not normal and there is no way that they will somehow optimize for the CPU and see massive gains.
Even fundamental analysis makes me disagree greatly. Ryzen's 'slow' CPU-to-CPU interconnect is locked to RAM speed, so you actually get a boost of in-CPU performance whenever that update for 3200MHz RAM is out. This also means that the IPC loss from multiple cores will go down simply as RAM speed goes up, because the interconnect is locked to it and not to the CPU clock, which is a crazy idea but ingenious. Most of all, this is a 1.0. Not even a 1.0. Ry+Zen aka Rise of Zen, not Zen. Equivalent to Core 2 Duo in terms of where AMD are at on the platform maturity curve. That makes Nehalem next, then Sandy Bridge, all whilst Intel is eking out single-digit IPC gains per release at the end of theirs.
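To put rough numbers on that coupling (a sketch assuming the usual 1:1 interconnect-to-memory-clock ratio on Ryzen; the rates below are just common DDR4 speeds, not benchmark results):

```cpp
#include <cstdio>

int main() {
    // DDR4 transfer rates in MT/s; the memory clock is half the transfer rate,
    // and the Ryzen interconnect is assumed here to run 1:1 with the memory clock.
    const int ddr_rates[] = {2133, 2666, 3200};
    for (int mt_s : ddr_rates) {
        int memclk_mhz = mt_s / 2;
        int fabric_mhz = memclk_mhz;  // 1:1 assumption
        std::printf("DDR4-%d -> memory clock %d MHz -> interconnect %d MHz\n",
                    mt_s, memclk_mhz, fabric_mhz);
    }
}
```

So moving from DDR4-2133 to DDR4-3200 takes the interconnect from roughly 1066 MHz to 1600 MHz, about a 50% faster link between the core complexes, without touching the CPU cores themselves.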
jimmysmitty :
I take that back. They will IF they finally code a game to properly utilize multiple cores. Until then gaming will remain as it is.
This makes me laugh, as if you could play a 2017 game on a single core processor today. Just because they all code for quad doesn't mean it will stay that way.
I never said that the code was precise. In fact code is never precise which is why games these days, being as complex as they are, always have room for improvement with patches, updates and driver fixes.
Bulldozer was not a true 8-core CPU either. It used a module design, a heavier-weight take on SMT (AMD called it CMT), where each pair of cores still shared some resources. It was actually something that was patched in Windows 7/Vista because the OS was treating it like a real 8-core CPU and loading each core one after the other. After the patch it would load every other core: the first core of a module, then the next module's first core, and once the first core of every module was loaded it would start loading the second cores. If it loaded them the other way around, filling both cores of a module before moving to the next, there were performance drops due to the shared resources.
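A minimal sketch of what that post-patch placement amounts to if you did it by hand on Windows (assuming, hypothetically, that logical processors 0/1 form module 0, 2/3 form module 1, and so on; real code would query the topology with GetLogicalProcessorInformationEx rather than hard-coding it):

```cpp
#include <windows.h>
#include <thread>
#include <vector>

void heavy_work(int id) {
    volatile double x = 0;
    for (int i = 0; i < 100000000; ++i) x += i * 0.5;  // stand-in for a demanding job
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back(heavy_work, i);
        // Pin worker i to the first core of module i (logical processor 2*i),
        // mirroring the "first core of every module first" order described above.
        SetThreadAffinityMask(workers.back().native_handle(), DWORD_PTR(1) << (2 * i));
    }
    for (auto& t : workers) t.join();
}
```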
Intel's version of SMT, Hyper-Threading, suffered a similar issue when first launched with the Pentium 4, where some programs saw a performance decrease due to how the OS saw the threads. That was fixed, and since its reintroduction in Nehalem SMT has benefited performance because the OS knows how to spread work across physical cores before doubling up on logical ones.
I don't take AotS as an end-all be-all for anything because it is a specific game coded a specific way and would not mean anything for other game types. It is interesting to see, but it has by no means been anything more than that.
DX12 has nothing to do with consoles. The PS4 uses its own low-level graphics API and the XB1 uses a custom version of DX. In fact the XB1 already had a lot of the features of DX12, since consoles already work closer to the hardware; they can do that because they are consoles with a set hardware and software. The reason DX was even created was to mitigate the vastly increasing number of hardware and software configurations, where, back before APIs, a game crashing could cause a kernel dump.
I do agree that 7 needs to go before we can see the full benefits of DX12; however, in that same vein, so does Windows 8, as it is still only a DX11 OS.
Look at the 4K resolution numbers from that survey. They are a vast minority, still only 0.78% of all surveyed Steam systems. VR headsets look like they are in about 0.24% of systems currently.
Now it is not the end-all be-all, but the majority of gamers do use Steam, so it is a pretty good data point. And VR will go up in the coming years, as will 4K. However, 4K will only see a burst in adoption when mid-range GPUs are able to play it decently at decent settings, and VR will increase when it becomes more cost effective and mid-range systems can push it.
Bleeding edge does not always mean the majority; in fact, mid-range is the majority most of the time.
I did forget about the BIOS, but I still have never seen an insane increase unless that microcode update fixes an existing performance issue. And any driver can increase performance, even chipset drivers.
The biggest problem for AMD is that this gain looks great because it had such a bland CPU to beat. AMD has basically caught up to Intel in terms of performance. I doubt they will have a Nehalem or SB jump. The reason I say this is that Intel has hit a wall, and if a company that outspends its rivals by billions a year hits a wall, so will AMD. They may be able to optimize, but their biggest gains will come when they get a decent process that allows for better clocks than they have on the current 14nm LPP. Anything else would require a vast change in how the CPU works. I honestly do not think we will see a major CPU performance jump for another 2-3 generations from either side, and by then I am hoping they are moving to new materials for the CPU itself.
I never said you could play a 2017 game on a single core. Hell, you couldn't play Crysis on a single core due to how the game was coded; it basically used the second core for the audio processing.
However, software is always well behind hardware. Crysis was an outlier, as it took a few years before dual cores became the bare minimum. And you can still game on a dual core, an i3. Sure it has SMT, but SMT is still at best a ~20% gain in performance.
I know it will eventually happen; eventually we will all be sitting on 8+ cores as the norm. Software will still take time to translate those advancements into performance gains.
We have had quad cores for 11 years; Intel released Core 2 Quad in 2006. As said, you can game on an i3, which is a dual core with SMT. Maybe in another 2 years a true quad core, not a dual with SMT, will be the bare minimum.
I'm interested in why so many Autodesk products were tested for the "Workstation" workload. What I would much prefer is a benchmark consisting of a flow analysis simulation, a stress analysis, and geometry recognition and import speed for each major CAD suite. It would be fairly simple to do; you just need canned test cases for Creo, SolidWorks, and Inventor/AutoCAD.