synphul :
What kind of a statement is that, the i5 will be able to perform 2.4m draw calls in dx11 but the fx 8350 will be able to perform 14m draw calls in dx12 so it's better than an i5? And yet when the i5 tops the 8350 AGAIN, it's irrelevant? That statement is irrelevant. Let's compare apples to apples shall we?
You're missing the point. The point is that the FX bottlenecks at its 1.whatever-m draw calls, but the i5 doesn't bottleneck at 2.4m. Once the FX can issue 14m draw calls, it can't possibly be the bottleneck in games that weren't even bottlenecking at 2.4m. Yes, the i5 is better. But if neither of them bottlenecks, the extra money is practically a waste.
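To make the argument concrete, here's a toy sketch. All the numbers are assumptions for illustration (the calls-per-frame figure and GPU fps are made up; the draw-call rates are the rough figures quoted in this thread): the slower of "what the CPU can feed" and "what the GPU can render" sets the frame rate, so headroom beyond the GPU's limit buys nothing.

```python
# Toy model: a frame needs 20,000 draw calls (assumed), and the GPU
# alone could render 90 fps (assumed). Whichever side is slower wins.
def effective_fps(drawcalls_per_sec, calls_per_frame=20_000, gpu_fps=90):
    cpu_fps = drawcalls_per_sec / calls_per_frame  # max frames the CPU can feed
    return min(cpu_fps, gpu_fps)                   # slower side sets the pace

print(effective_fps(1_200_000))   # FX under DX11: CPU-bound at 60 fps
print(effective_fps(2_400_000))   # i5 under DX11: GPU-bound, full 90 fps
print(effective_fps(14_000_000))  # FX under DX12: GPU-bound, same 90 fps as the i5
```

Once both CPUs clear the GPU's ceiling, the extra draw-call capacity is invisible in the benchmark, which is the whole point about the extra money being wasted.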
synphul :
Using a gtx 970, dx11
i5 - 1.3m draw calls
8350 - 1.2m draw calls
They were virtually identical, so draw calls weren't what was holding back the fx 8350 in the first place.
Or maybe the difference does lie in that small 100k gap in draw calls. But let's assume it doesn't. What would it be then?
synphul :
It should also be noted (as it is in just about every dx12 review) that draw calls are simply one metric, not the whole story. It will help alleviate the bottleneck fx 8 core cpu's have had with high end gpus but it's not going to make the cpus any stronger. It just isn't.
I never said it would make the CPU stronger. I said it will make use of all the power that it has that goes unused in games nowadays. Draw calls are indeed only one metric, but it's an indication of how fast the CPU can feed the GPU.
synphul :
Theory means little, results are all that matters. If theory worked, amd's bulldozer/piledriver wouldn't have flopped. Desktop cpu's aren't having to go wide, amd chose that path.
Oh they must now...
synphul :
Intel hasn't had to work very hard, their 4.5yr old cpus are still ahead of an fx 8350.
4.5 years... That means start of 2011, that means, you're referring to the i7 2600k? Yeah. Single core performance was very slightly faster. Multicore performance was similar. Price of the FX was MUCH better, making it a better value... But meh.
synphul :
When you're 10 laps ahead in a race there's no reason to floor it. That's why performance hasn't increased much. When it comes to gaming, intel's cpus haven't been a bottleneck for years, they're still waiting on the gpus to catch up.
Uh... Performance hasn't increased much because we're nearing the end of Moore's Law and transitioning into Rock's Law.
GPUs catching up... Not really. It's not like GPUs are 'behind' CPUs. Graphics workloads are simply easier to scale up than other computations: swapping low-res textures for high-res textures alone puts a hugely different load on the GPU. Meanwhile, any CPU can be brought to its knees simply by using a lot of physics, a lot of light sources, or a lot of AI units on screen. CPUs are actually still the limitation.
Using Skyrim as a reference the way it was done there is bollocks. Skyrim is a very single-threaded game, and no results for the video encoding have been presented. With no reference to what the video encoding is actually doing, there's no complete picture. If the video encoding is also single-threaded, only two cores are used on both CPUs, and obviously the i5 wins in that case. If the encoding uses all the free cores, the i5 will still win in Skyrim due to its faster single-core performance, but the FX will finish encoding the video earlier, since it has 7 weak cores available while the i5 has 3 strong ones. But that's not shown in the results presented, so saying the i5 wins in multithreading is short-sighted and misses what's really going on.
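The game-plus-encoding scenario above can be sketched with a toy throughput model. Every number here is an assumption for illustration (relative core speeds of 1.0 vs 0.7, one core pinned by the game, an encoder that scales linearly over the free cores); it's not a real benchmark, just the arithmetic behind "7 weak free cores beat 3 strong free cores".

```python
# Toy model with assumed numbers: i5 = 4 cores at relative speed 1.0,
# FX-8350 = 8 cores at relative speed 0.7. The game pins one core;
# the encoder scales across whatever cores are left over.
def encode_time(total_cores, core_speed, work_units=100, game_cores=1):
    free = total_cores - game_cores          # cores left for the encoder
    return work_units / (free * core_speed)  # lower is better

print(encode_time(4, 1.0))  # i5: 3 free strong cores
print(encode_time(8, 0.7))  # FX: 7 free weak cores -> finishes sooner
```

Under these assumptions the FX finishes the encode first (7 × 0.7 = 4.9 core-units of free capacity vs the i5's 3 × 1.0 = 3.0), while the i5 still wins the game's single pinned thread — which is exactly why a benchmark that only reports the Skyrim fps tells half the story.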
synphul :
Looking at current dx11 benchmarks, considering the draw calls are so close between the i5 and fx 8xxx, why is it that i5's and i7's continue to dominate the top of the benchmarks?
Multiple answers. It's a mixture of all of them.
- Small differences in draw calls can actually be the difference between bottlenecking and not bottlenecking, even if it's only 50,000.
- Draw calls are not constant in real time, and the more 'peaks' there are, the more often the weaker CPU will bottleneck. This explains how the FX can sometimes reach the framerate of an i5 but experiences a lot more fps drops, since it has a harder time keeping the GPU fed.
- The difference in draw calls when using an AMD GPU is actually huge between the i5 and the FX. We're talking about more than twice the draw calls here, so drivers are also an influence.
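The "peaks" point above can be sketched numerically. The per-frame draw-call counts below are invented for illustration; the two rates are the rough DX11 figures quoted earlier in this thread (1.3m for the i5, 1.2m for the FX). The idea: at 60 fps, a 1.3m calls/sec CPU has headroom for ~21,667 calls per frame while a 1.2m one tops out at 20,000, so a spike between those two numbers drops frames on one chip but not the other.

```python
# Assumed per-frame draw-call counts, with a couple of spikes between
# the two CPUs' 60 fps headroom limits (~21,667 vs 20,000 calls/frame).
frame_calls = [15_000, 21_000, 18_000, 20_500, 16_000]

def frames_dropped(drawcalls_per_sec, target_fps=60):
    # A frame "drops" when the CPU can't issue its calls fast enough
    # to hold the target frame rate.
    return sum(1 for calls in frame_calls
               if drawcalls_per_sec / calls < target_fps)

print(frames_dropped(1_300_000))  # i5: spikes still fit under its headroom
print(frames_dropped(1_200_000))  # FX: same spikes now cause dropped frames
```

Average load is nearly identical for both, yet only the FX stutters — which is how two "virtually identical" draw-call benchmarks can still produce visibly different in-game smoothness.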
synphul :
Along comes dx12 and increases draw calls substantially, but again their comparative output is similar in this regard. So what does the rest of the game's performance rely on? The cpu, as before. Dx11 didn't keep game devs from coding the rest of their game engine to run on multiple cores or threads.
It did keep them from doing it efficiently.
synphul :
The inherent cpu performance remains for each of the chips and intel's is the stronger design. Even an i7 is a quad core, competing with amd chips with twice the cores. You don't see intel going wide for this very reason: when a well built quad core keeps up with the competition's octacores, the additional hex and octacore designs from intel are pure icing on the cake and carry luxury prices as they have no competition. Intel didn't suddenly cut everything but their extreme lineup on 2011v3. They don't have to.
The FX-8 is not really a true octacore. It's basically the same idea as an i7, except AMD's equivalent of 'hyperthreading' is more efficient, which is what let them call these eight-core CPUs. You probably know this already, but these CPUs have four modules. Each module has one full main core and a secondary core that complements it. Unlike Intel's hyperthreading, which is basically worth 30% of a full core, the secondary core is more like 70% of a full core. Intel made the better choice for the time they released their CPUs.
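The module arithmetic above is worth spelling out. The 70% and 30% figures are the ones claimed in this post, and the 1.3x per-core speed advantage for Intel is an extra assumption for illustration; with those numbers, aggregate throughput lands nearly equal, while single-thread speed (what most DX11-era games see) stays firmly in Intel's favor.

```python
# Assumed figures: FX secondary core ~70% of a full core, Intel
# Hyper-Threading ~30%, and an Intel core ~1.3x an AMD core (assumption).
def effective_cores(modules, secondary_fraction):
    # Each module = 1 full core + a fractional secondary core.
    return modules * (1 + secondary_fraction)

fx_total = effective_cores(4, 0.7) * 1.0  # 6.8 AMD-core units of throughput
i7_total = effective_cores(4, 0.3) * 1.3  # 6.76 AMD-core units of throughput

print(fx_total, i7_total)  # aggregate throughput: nearly identical
print(1.0, 1.3)            # single-thread speed: clearly Intel's
```

Under these assumptions the two designs are almost a wash in total throughput, so which one "wins" depends entirely on whether software spreads its work across all the threads — which is exactly the DX11 vs DX12 argument running through this thread.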
synphul :
Your theory about the guns firing on screen goes back to why intel can do more with less. It processes the data that much faster. Only in a scenario where core performance was identical would core count be more relevant. While it takes xyz time for an amd core to process 8 guns firing, intel has processed those 8 and gone on to process the next 8 before amd's done with its first 8 gun calculations. Which explains how intel cores get more done with half the hardware. If I'm hauling xyz product to an end point and my vehicles can only handle a half ton of material vs the competition's 1 or 1.25 tons per vehicle, my only hope is to be able to run twice as many. We've seen this the past 5yrs between the two different architectures.
AMD tried to compensate for this by adding the secondary cores, but they go by unused most of the time. Intel's cars can drive faster. The AMD cars need to drive slower, and well, with DX11, six of the eight lanes are blocked. The roads will be opened with DX12.
synphul :
Ashes of singularity is still in beta and I certainly don't see any benchmarks out for it yet. In its current phase it's still a speculation fest. Speculation does very little for me, good bad or otherwise. What matters are benchmarks, pair a real game with actual dx12 encoding on a real set of cpus and gpus and see what really happens. That's pretty much all I'm interested in, the rest is make believe. I can draw out 50 ways on paper why amd's weak multicore approach should have been better than intel's fewer/stronger core approach but at the end of the day it just doesn't pan out. People make dx12 out to be an amd only boon and it couldn't be further from the truth. It will be beneficial to everyone and gaming in general.
Ashes of Singularity has shown enough real-time demos to demonstrate that it genuinely keeps all threads completely busy.
DX12 is not beneficial for AMD only. But AMD will receive the biggest boost in both their CPUs and GPUs. You don't have to believe me. The benchmarks will speak for themselves.
synphul :
The bit about amd being ahead of its time is starting to sound like a broken record and a way to boost morale when their selling points don't pan out. That's what was said about their x64 cpus, that's what was said about bulldozer and piledriver - that's the reason they put mantle on pause and told everyone to go dx12.
But it's the truth. GCN was also ahead of its time. In that case it was both a good thing and a bad thing depending on your perspective. In the case of bulldozer it was mostly a bad thing. AMD was wiping the floor with Intel's Pentium 4 for a reason. But they risked too much after that.
synphul :
Let me guess, if zen doesn't deliver on its promises - it too will be ahead of its time?
Depends on what it is. Things are not labeled ahead of their time because of a brand name. They are labeled ahead of their time when they're innovations that people don't adopt soon enough, or that (almost) reach EOL and only afterwards end up being used as intended. I can tell you that the Fury X is also ahead of its time, though only shortly: it will wipe the floor with the Titan X under DX12. AMD simply can't feed the Fury fast enough under DX11. Not that it matters in this discussion; we will see what Zen will be.

People are under the impression that I think DX12 is magic. It's not. I simply know what it will do when used properly. But everyone likes to think that things will stay exactly the same as they are, and that things will change exactly the way they have in the past, which is slowly and only more taxing. This time it's different.
synphul :
Maybe if they focused more on humbly being in the current time, they'd actually compete for a change.
It would suit them well, actually, considering they have to cut back on their research department. They have spent a lot of effort innovating on things that go unused. They overestimate the intelligence and adaptability of their market.
synphul :
Look at all the smart tv's out and about, all the various new gadgets to try and get basic internet connectivity to users' televisions. Does anyone think it's a new concept? Webtv existed in the mid 90's, 20yrs ago. Doubtless many haven't heard of it, but it did just that. I guess they were ahead of their time too but in the end it wound up a failure. It was innovative, unique, lots of things - but the bottom line, it ceased to exist. Everything kind of comes down to that bottom line, is it getting the job done and is it getting it done better than the competition.
You are right. That's the exact definition of being ahead of their time. Same goes for Kinect on the Xbox, for example, or the EyeToy for the PlayStation. Being ahead of your time is not a good thing for business most of the time. But it does show the vision that a company has.
But I'd like to make you think a little with a basic question. AMD has the reputation of being power hungry and weak, the exact opposite of what anyone would want in consoles these days. So, why were their CPUs used in the most recent consoles, and not Intel's? Why is the next Nintendo console also rumored to be using AMD? If a $150 i5 is better than a $150 FX-8 in performance, power consumption and heat, why not use the i5 in consoles?