If absolutely everyone uses MSVC then why does Intel even have their own compiler?
1. Intel's compiler is generally the best compiler for squeezing the last bit of performance out of Intel chips. The commercially licensed version of Intel's compiler isn't cheap, so Intel makes money selling it to people who want that last little bit of extra performance.
2. Intel's compiler makes Intel's chips look good (and possibly makes non-Intel chips look bad), so if Intel can get more software compiled with its compiler instead of a third-party compiler such as MSVC, GCC, etc., Intel's CPUs look better than the competition and Intel sells more CPUs.
3. Not everybody uses MSVC. GCC in particular has a good following.
And here is Intel's fix in response to AMD's lawsuit over its "defective compiler":
/QxO -xO    Enables SSE3, SSE2, and SSE instruction set optimizations for non-Intel CPUs
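For reference, that switch goes on the compile line like any other option. A hedged example, assuming the usual icl (Windows) and icc (Linux) driver names and the standard /O2 and -O2 optimization flags:

    icl /QxO /O2 game.cpp     (Windows)
    icc -xO -O2 game.cpp      (Linux)

The point is that a developer has to opt in to it deliberately; the ordinary /Qx and -x options only turn those vectorized code paths on for CPUs that identify themselves as GenuineIntel.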
This discussion was already had ~10 pages ago, but the fact remains that these "games optimized for Intel" sabotaged AMD support. Can these games be taken seriously as a measure of how well AMD CPUs perform on, say, the compiler everyone else uses?
Performance in current games is largely determined by these factors, in roughly this order:
1. Your graphics card's performance in that particular game
2. Having an adequate amount of RAM
3. Single-threaded throughput of the CPU cores
4. Having a CPU that can handle more than 2 threads
5. How well a particular game runs on your particular CPU's architecture (hits the FPU a lot vs. not a lot, likes a lot of cache vs. doesn't really care, single-threaded vs. uses a lot of threads well, sensitive to memory latency vs. not, etc.)
6. The performance of your computer's platform (chipset, RAM speed [as long as it's not insanely slow], disks, etc.)
7. Your OS/windowing manager/graphics subsystem
8. What compiler was used to compile the game
There is fairly little real difference in current games between current enthusiast CPUs from AMD and Intel. They can all handle games just fine if paired with a good GPU. Your GPU is going to be the biggest determinant of performance, period. Sure, you have all seen the graphs in game benchmarks showing that a CPU like the i7-3770K can produce something like 20-30 more FPS than an FX-8150 in a certain title...at 1366x768 resolution with low detail, where no chip is pushing less than 100 fps. That sort of benchmark is purely academic, since the game is obviously playable on any of those CPUs (you need a minimum of only 30 fps) and any real person is going to be running something like 1920x1080, where the difference between CPUs is only a few insignificant FPS.
But if you wonder why the Intel CPUs are at the top of those "academic use only" low-resolution game benchmarks, it is because games generally only hit two cores of a CPU very hard. Sandy Bridge/Ivy Bridge has better single-threaded performance per clock than Bulldozer, and the lightly-loaded clock speeds aren't that different between the two. Bulldozer's prowess is in heavily multithreaded applications, which most games are not.
If you refuse to believe that Intel's agenda is to get everyone to believe how slow AMD is, then there isn't anything more we can discuss.
Of course Intel's agenda is to get everybody to believe that they are better than AMD. That's exactly the same as when Ford tries to convince everybody that they make better vehicles than GM and the Democrats try to convince everybody that the Republicans are wrong.
"Optimized for Intel" does NOT equate to "not optimized for AMD."
Mmmmm, that's debatable. Intel was supposed to stop that behavior, but there is quite a bit of evidence they are still refusing to use more recent optimizations on non-Intel CPUs. I can understand Intel not supporting AMD-only extensions such as SSE4a in ICC, but ICC should query ONLY the feature flags when deciding which SIMD extensions to enable (see the sketch below). For example, the post above gives an optional switch to allow non-Intel CPUs to use SSE, SSE2, and SSE3. Meanwhile, current AMD CPUs like Bulldozer support SSSE3, SSE4.x, and AVX in addition to SSE/SSE2/SSE3. Not allowing those to be used on a non-Intel CPU when the binary WILL use them on an Intel CPU is in fact crippling the non-Intel CPU.
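To make the complaint concrete, here is a minimal sketch of what "only query feature flags" means. This is plain C using GCC/Clang's cpuid.h helper, purely my own illustration and not how ICC's dispatcher is actually written: the CPUID feature bits report SSE3/SSSE3/SSE4.1/AVX identically on Intel and AMD parts, so a dispatcher can pick the fastest code path without ever looking at the vendor string.

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* CPUID leaf 1 returns the feature flags in ECX/EDX; the bit layout
           is the same on Intel and AMD, so no vendor-string check is needed. */
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return 1;

        /* Pick the best SIMD code path from the flags alone.  (A production
           dispatcher would also check OSXSAVE/XCR0 before using AVX.) */
        if (ecx & bit_AVX)
            puts("dispatch: AVX path");
        else if (ecx & bit_SSE4_1)
            puts("dispatch: SSE4.1 path");
        else if (ecx & bit_SSSE3)
            puts("dispatch: SSSE3 path");
        else if (ecx & bit_SSE3)
            puts("dispatch: SSE3 path");
        else
            puts("dispatch: baseline SSE2 path");

        return 0;
    }

A Bulldozer chip sets all of those bits, so a flag-based dispatcher like this would hand it the AVX path. A dispatcher that first insists on seeing GenuineIntel never gets that far, which is exactly the crippling being described.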