gamerk316 :
de5_Roy :
you guys know that once mantle goes wide and dx12 comes out, core i3 and fx will become interchangeable. i3 will still have worse non-gaming app performance compared to hex cores and higher. fx will still have higher power consumption and higher app performance. reduction of cpu bottleneck doesn't help amd alone.
To be fair, the i3 can spit out good FPS numbers because over a long time period, any flatlines get averaged out. But minimum-FPS-wise and latency-wise, it's going to lag behind FX, even if it puts up similar average FPS numbers. I'd still be interested to see how one would do at about 4.5GHz, though...
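The average-versus-latency point can be illustrated with a quick sketch. The frame-time numbers below are made up for illustration: two hypothetical CPUs end up with essentially the same average FPS, but one of them hides periodic stalls that only show up in a percentile (frame latency) metric.

```python
# Hypothetical frame times in milliseconds per frame (invented numbers,
# not benchmark data). Both traces cover 100 frames.
smooth = [16.7] * 100                 # steady ~60 FPS, no stalls
spiky = [14.0] * 95 + [68.0] * 5      # usually faster, but 5 big stalls

def avg_fps(frame_times_ms):
    """Average FPS over the whole run: total frames / total seconds."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def percentile_99(frame_times_ms):
    """99th-percentile frame time: the 'worst case' most players feel."""
    s = sorted(frame_times_ms)
    return s[int(0.99 * len(s)) - 1]

# Both average out to roughly 60 FPS...
print(round(avg_fps(smooth), 1), round(avg_fps(spiky), 1))
# ...but the spiky trace has a far worse 99th-percentile frame time.
print(percentile_99(smooth), percentile_99(spiky))
```

This is exactly why an average-FPS chart can make two CPUs look interchangeable while a frame-latency chart separates them.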
I never thought I'd say this, but thank you. Every time I argue against low-end Intel, I show the Tech Report FX 8350 review, where the Pentium G gets pretty much the same average frame rate as everything else in BF3 yet does absolutely horribly in frame latency. And then all the pro-Intel guys who think every Intel CPU is amazing post average frame rate numbers and go "see how wrong you are!!?!?!?!?!?!?!?"
I've been looking for a particular review forever, too. It showed Intel's Hyper-Threading really messing up frame times in a game on a somewhat modern HT implementation (I do realize HT got better after Nehalem). It was a Chinese review, and I can't find it anywhere.
It's sort of funny how those sorts of things never make it to larger websites, huh?
szatkus :
8350rocks :
@juan: we will see who is right when the time comes. Though I do not know if AVX3 will be in or not.
@szatkus: JK says they will be better by skylake. My source says even if they fall short of predictions they will be "on par".
As for limited resources, they had less when JK designed K7/K8, and HTX...so...what?
You all are so caught up in the R&D budget you are forgetting the PEOPLE. These are not your average engineers. Every project leader at team green right now is the equivalent to a Michael Jordan caliber player at what they do. If the bulls had paid MJ less money, would he have been any less proficient at playing basketball? No. I understand your concerns, but these guys are BRILLIANT! You can spend less on R&D when you know what works to begin with...
You're completely right, but one thing: we don't know what Skylake's performance will be (and probably neither does JK). I'm 90% sure it will be Haswell-ish +7-12% or less, but there's a chance it will be a huge jump in performance. Declaring a win two years in advance isn't a good idea; you could break the hearts of some AMD fan(boi)s.
8350rocks :
@juan: who said the dies were no longer monolithic? If that is your speculation, you are using a minnow bucket to hold the water.
Yep, like I said before, I don't see anything wrong with properly implemented CMT.
Skylake doesn't have that much room left to improve performance. And the big elephant in the room with Intel is that every generation since 32nm, they've lost maximum overclocking potential. Before 32nm, they were pretty consistent at around 4GHz. But it's not really about maximum overclocking potential; it's about a chip built on a process that simply doesn't clock as high.
Meaning that if this trend continues, Intel will continue to lose clock speed, and eventually it could get bad enough for them to have to lower clockspeed on stock products on the high end.
People don't want to admit this. When Haswell and IB (both 22nm products) didn't clock so well, every Intel guy jumped up and blamed the TIM. A few delidded and had good results (but not all of them), so everyone just assumed it was only the TIM.
And then Intel released Devil's Canyon and everyone was disappointed.
Unless my memory is wrong, 32nm SB was good for 5ghz, 22nm IB was good for about 4.5ghz, and 22nm Haswell is good for about 4.5ghz.
Are my numbers about right? I want to do some math on them and make some guesses as to where 14nm clocks would end up but I want to make sure we're all in consensus here.
source for SB needed!
Haswell:
http://www.overclock.net/t/1411077/haswell-overclocking-guide-with-statistics 4.53ghz average
IB:
http://www.xbitlabs.com/articles/cpu/display/core-i7-3770k-i5-3570k_9.html
If Intel loses 10% of maximum clockspeed potential with each die shrink, we are looking at 14nm Intel parts that don't make it past 4.05GHz on average. If AMD could keep clocks where they are now, they could end up with a 10% to 20% clock speed advantage over Intel.
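The extrapolation above can be written out explicitly. This is just the post's speculative "10% per shrink" assumption applied to the claimed averages (roughly 5GHz for 32nm SB and roughly 4.5GHz for 22nm IB/Haswell), not measured data:

```python
# Speculative clock-scaling math from the post, not measured data.
sb_32nm = 5.0        # GHz, claimed average max OC for 32nm Sandy Bridge
ivb_22nm = 4.5       # GHz, claimed average max OC for 22nm IB/Haswell

# Observed loss across the 32nm -> 22nm shrink: ~10%.
loss_per_shrink = 1 - ivb_22nm / sb_32nm

# Assume the same fractional loss repeats for the 22nm -> 14nm shrink.
projected_14nm = ivb_22nm * (1 - loss_per_shrink)   # ~4.05 GHz

print(round(loss_per_shrink, 2), round(projected_14nm, 2))
```

If the trend held for another shrink after that, 10nm parts would land around 3.6GHz, which is where the "stock clocks may have to come down" worry in the earlier post comes from.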
I don't have high hopes for 14nm Intel as far as HEDT goes. I do have high hopes to see what Intel fanatics blame the poor clock scaling on this time. Perhaps we are going to go back to a time when Intel breaking 4ghz is a rare treat?