"Why hasn't Intel shown any concrete game demo?"
Ohh Ohh PICK ME! It makes sense to demonstrate products to the right audience. Why exactly would you want to show off games at IDF, an HPC-oriented event? The last big gaming event was TGS'09, and Larrabee B0 samples were far from ready by then — B0 silicon taped out on the 15th of August. It was a close call that they were able to show anything at IDF at all. The next big gaming event is CES 2010, so I suggest you drop that argument until then.
"Larrabee has failed, only 1TFLOPS! HD5870 can do 2.3GFLOPS!!! FERMI IS 8X FASTER!!!!"
Sometimes I wonder whether people post before they think, then forget by the time they post again. That 2.72 TFLOPS is theoretically possible, yes, but real workloads usually miss that target by a huge margin.
The 8800GTX managed to score 185 GFLOPS (single precision) in practice, even though it was supposed to be capable of 518.4 GFLOPS. The Tesla C1060 (the same GT200 chip as the GTX280) promised 933 GFLOPS but only managed 370 GFLOPS.
http://www.idre.ucla.edu/events/2009/gpu-workshop/documents/David_Tesla.pdf
Same story for ATI. The HD4870 promised 1.2 TFLOPS, yet only manages about 200 GFLOPS using OpenCL, 540 GFLOPS using IL/Brook+, and one guy claims he got 880 GFLOPS, but there's no confirmation that his method is usable in general. What's worse for ATI cards is that to reach higher FLOP numbers, developers have to fall back on proprietary programming languages such as Brook+ and IL. (Hint: developers don't like that.)
http://forums.amd.com/devforum/textthread.cfm?catid=390&threadid=120413&
http://forums.amd.com/forum/messageview.cfm?catid=328&threadid=105221
http://cerberus.fileburst.net/showthread.php?t=54842
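All those theoretical figures come from the same simple formula: shader units × shader clock × FLOPs per unit per clock. A back-of-the-envelope sketch (unit counts and clocks from the public spec sheets; the "measured" numbers are the ones quoted above, so take the percentages as illustrative, not as benchmarks):

```python
def peak_gflops(units, clock_ghz, flops_per_clock):
    """Theoretical single-precision peak in GFLOPS."""
    return units * clock_ghz * flops_per_clock

cards = {
    # name: (shader units, shader clock GHz, SP FLOPs/unit/clock, measured GFLOPS)
    "8800 GTX": (128, 1.35,  3, 185),  # MAD+MUL dual issue -> 3 FLOPs/clock
    "GTX 280":  (240, 1.296, 3, 370),  # Tesla C1060 result quoted above
    "HD 4870":  (800, 0.75,  2, 540),  # MAD -> 2 FLOPs/clock; IL/Brook+ result
}

for name, (units, ghz, fpc, measured) in cards.items():
    peak = peak_gflops(units, ghz, fpc)
    print(f"{name}: peak {peak:.1f} GFLOPS, measured {measured} "
          f"({100 * measured / peak:.0f}% of peak)")
```

Run it and you'll see every card lands somewhere between a third and a half of its paper number — which is the whole point.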
Fermi is indeed 8x faster — 8x faster in double precision than its predecessor. That used to be a whopping 78 GFLOPS, so even at 624 GFLOPS it is still far below the HD5870's theoretical single-precision figure. The only reason it'll actually be used instead of an HD5xxx is IEEE-754 compliance and C++ support. Yet none of this matters a single bit for games. Games benefit exactly 0% from double precision and IEEE compliance; in fact, those features would actually slow them down.
Suddenly that real-world 1 TFLOPS Larrabee managed to set doesn't sound so bad anymore, huh?
In the end, TFLOPS tend to mean relatively little when it comes to games. The HD4870X2 had 2.4 TFLOPS while the GTX295 was packing a whopping 1.7 TFLOPS, yet the GTX295 showed the HD4870X2 every corner of the room (except the price corner).
"Whatever, it has 80 cores and they had to overclock it, LOL"
Question! How exactly is it possible to stuff 80 cores onto a 650 mm² die when Intel said that the step to 32nm is what would allow a 48-core variant? How exactly did they suddenly manage to get 80 in there ... at 45nm? It certainly wasn't multiple chips, because it was clearly stated that this was a SINGLE-chip solution. Maybe because that lousy journalist said so? Someone should tell him Polaris != Larrabee.
Aren't you wondering why not many websites have covered this particular benchmark? Anandtech, Tom's Hardware, SemiAccurate, BS^N, Fudzilla? I'm pretty sure they're all under NDA. Read some more forums and you'll notice there are some peeps who know more but can't say. (Hint: Beyond3D)
There's quite a bit of info that isn't clear. Was this a 32-core part running at 1 GHz or a 16-core part running at 2 GHz? How much did they overclock it, 15-25%? I'm quite curious to say the least; we'll have to wait this one out.
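Part of why the reports are ambiguous: both rumoured configurations land on the exact same theoretical number, assuming the 16-wide vector unit with one fused multiply-add per lane per clock that the Larrabee paper describes (the core counts and clocks here are the rumours, not confirmed specs):

```python
# Peak SP GFLOPS = cores x 16 vector lanes x 2 FLOPs (fused multiply-add) x clock.
# The 16-wide VPU and per-lane FMA are from the Larrabee SIGGRAPH paper; the
# core/clock combinations are the two rumoured configurations, nothing official.
def larrabee_peak_gflops(cores, clock_ghz, lanes=16, flops_per_lane=2):
    return cores * lanes * flops_per_lane * clock_ghz

print(larrabee_peak_gflops(32, 1.0))  # 32 cores at 1 GHz -> 1024.0
print(larrabee_peak_gflops(16, 2.0))  # 16 cores at 2 GHz -> 1024.0
```

Either way you get ~1 TFLOPS on paper, which matches what was actually demonstrated.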
"Dude, they can't beat ATI/nVidia at their own game, pass whatever you're smoking browski"
So they can't beat them because of ATI's and nVidia's immense experience, yet if they don't, they automatically fail. Rrrright. All Intel has to do is bring decent performance at a reasonable price to the table. The flexibility alone will draw a lot of developers to it.
"It fails because it's Intel!"
Yup, keep telling yourself that. Intel has been on a streak ever since Conroe. The Core family is whopping AMD's ass in performance. They're rocking the SSD market. I can't wait for Light Peak (can you say 10-100 Gbit/s?). Next-gen Xeons coming up (Nehalem-EP/EX)! They're also working on a flexible cGPU, as you might know (LRB or something). They also said they managed to crack the CPU-GPU-shared-RAM issue.
Frankly I'm just more interested in what Intel has to offer than AMD. AMD has done nothing but blame Intel for its failures while delivering inferior (but very impressive price/performance, I must admit) products. I'm not going to claim Intel didn't bribe here and there, but AMD went from the AMD64 glory days to where they are now. Nonetheless, I'm looking forward to Bulldozer, but they are nowhere near as innovative as Intel.
There are still quite a few things we don't know about Larrabee. Will they skip 45nm in favour of 32nm? At what GHz are they going to run it? How overclockable is it going to be (looks at Core i7/Gulftown *faints*)? Are they planning to release a chip variant as well? And with Cell development brought to a stop, is Sony considering Larrabee? Everything in time, I guess.
PS: I suggest you look up "Larrabee: A Many-Core x86 Architecture for Visual Computing". There's more information in it than meets the eye.