MU, thanks for the post. Lots of info. Most people forget that GPUs tend to be the major bottleneck in games these days, while most CPUs are more than enough and underutilized.
There are a few poorly coded games that need a faster CPU per se, and that's where Intel may shine, but at high resolutions Intel and AMD normally perform about the same in gaming.
Except for overclockers, IB is not a step sideways, let alone backwards. The 3770K is a bit cheaper than the 2700K, slightly outperforms it in CPU benchmarks and greatly outperforms it in Quick Sync and the integrated GPU. Plus there's USB 3.0 and PCIe 3.0. Besides, the Z77 mobos did not appear and then just sit around for six months waiting for IB to release, unlike AM3+ and BD.
IIRC SB wasn't that great an overclocker either when it first released - I think the average review pegged it around 4.6 or 4.7GHz. Most people expect IB to similarly improve with time.
While IB for desktop does not make sense as an upgrade for anyone with SB, it does for those of us still using C2Qs 😛.
That's true. SB was about 4.5-4.7GHz on air, the higher end if you were lucky. There was no stepping improvement, but the mobos and the BIOS did get some upgrades (Z68) which may have helped. There were very few SB CPUs at 5GHz on air cooling, and when they started to appear they were using modified BIOSes, mostly on Gigabyte mobos if I remember, which gave the extra boost. Some were getting 5.6GHz on air but I doubt the temps were within reasonable limits.
As I said before, there will be a GHz barrier we won't overcome for a few more gens of CPUs. It took us quite a while to get past 4GHz on air: Pentium D was the first to get very close, and if I remember correctly it was Core 2 that first hit 4GHz on air, then first-gen Core i and Phenom II. SB was really the first arch to go well beyond 4GHz on air.
I think even Piledriver with that new tech will have a hard time getting to 5GHz on air.
Random +1 to Valve. Their 9-year-old games push all 4 of my cores equally. I love good programming.
On topic: I think that saying BD should have been clocked higher is true. Almost every FX-8xxx can reach 4.2GHz or more. I think AMD left the clocks lower because of the already excessive power use. That being said, the fact that my FX-8120's voltage supposedly goes up to 1.55V under max turbo blows my mind. I could probably undervolt the chip to 1.1-1.2V with few problems. Power use must drop some by doing that, so why doesn't AMD do it?
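Rough back-of-envelope, assuming dynamic power dominates and the clock stays put: dynamic power scales roughly with C x V^2 x f, so dropping from 1.55V to 1.2V gives (1.2/1.55)^2 ≈ 0.60, i.e. around a 40% cut in dynamic power, and leakage falls with voltage too. Don't take the exact number too seriously, but it shows why undervolting pays off so much.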
It depends on the Valve game, but any of them using the newer L4D engine, which will have multicore rendering in the video options, will do this. I remember when Valve implemented it in L4D and my Q6600 would hit 80% usage on all 4 cores.
I haven't checked usage of TF2 or any game on my 2500K, but I doubt it's 80% on all 4. That is the benefit of creating an amazing modular engine and upgrading it over time, instead of throwing out a brand new engine that takes more time and money to develop and potentially brings more bugs/issues. I would say that Source is probably one of the most stable engines out right now.
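If anyone wants to actually check per-core usage while a game is running, something along these lines should work (just a sketch, assuming Python with the third-party psutil package installed):

    import psutil  # pip install psutil

    # Print per-core load once a second for 30 seconds while the game runs.
    for _ in range(30):
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        print("  ".join("core%d: %5.1f%%" % (i, p) for i, p in enumerate(per_core)))

Task Manager or Resource Monitor shows the same thing; this is just handy if you want to log it.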
As for BD, the reason they don't undervolt them or set them to 1.1-1.2V is that they probably wouldn't have as many stable CPUs. There are always a few CPUs of any arch that will run below stock voltage and stay stable. My Q6600 did 3GHz at 1.25V, less than stock (1.325V) and even less than the VID (1.275V), and was 100% stable. Never had a BSoD. I had a QX6850 that hit 3.5GHz using 1.275V, less than its VID of 1.325V. My 2500K is doing 4.5GHz at 1.3V (most need about 1.35V to hit 4.5GHz stable) and idles at 1.6GHz at 0.986V. I seem to be lucky in that manner.
But that said, AMD goes for a voltage that is stable across the whole bin rather than the lowest each individual chip can do. So does Intel.
Because Intel doesn't want its fate to be in other people's hands; it does its best to minimise that happening.
By having their own compiler, Intel puts pressure on Microsoft and other compiler makers to keep improving theirs.
This is part of the reason why x86 has prevailed over many superior architectures.
If you refuse to believe that Intel's agenda is to get everyone to believe how slow AMD is, then there isn't any more we can discuss.
How is this either relevant or significant?
Every company tries to downplay its rivals' products, yet once again we see this crazy suggestion that Intel is the Great Satan for doing the normal thing.
Because Intel is evil. Yet strangely, in all of their marketing and all of their performance estimates, they never compare their upcoming product to AMD, just to their previous product, while AMD always compares themselves to Intel in their marketing.
Still, I think this topic should stay back where it was, 10+ pages ago. It tends to fire people up.