[citation][nom]IndignantSkeptic[/nom]@PreferLinux, I'm shocked programmers still program in low level. Also how can OpenMP not work on GPUs when a section of OpenMP is OpenACC?!![/citation]
Yes, they do – how else do you intelligently optimise something? A compiler can't do everything, and certainly not as well as a person. Besides, what counts as high-level? C and C++ aren't really (they are in some ways, but not in others), yet they're very common, especially for performance-sensitive code. Anyhow, CUDA requires its own language, and I don't think Nvidia would let you use anything else... (OpenACC isn't a section of OpenMP, by the way – it's a separate standard.) As for OpenMP on GPUs, I based that on the following from Wikipedia (there's a quick illustration of the difference after the quote):
Pros and cons
...
Cons
...
Can't be used on GPU
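To make the OpenMP-vs-CUDA point concrete, here's a minimal sketch of my own (not from the article or from Wikipedia), assuming an ordinary C compiler with OpenMP support such as gcc -fopenmp: one pragma spreads an existing loop across however many threads the hardware offers, whereas CUDA would need the loop body rewritten as a kernel in Nvidia's own dialect.

[code]
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    /* plain C arrays and a plain C loop - nothing GPU-specific */
    static double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 0.25;
    }

    /* the only change needed to parallelise: one pragma.
       Remove it and the same code still compiles and runs serially. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] * b[i];

    printf("c[42] = %f (ran on up to %d threads)\n", c[42], omp_get_max_threads());
    return 0;
}
[/code]

That's a big part of the appeal of the Phi: code like this already works, while a GPU port means learning CUDA (or OpenCL) and restructuring the program around kernels.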
[citation][nom]blazorthon[/nom]I doubt that you could do a whole lot with that much performance even if we made games that could somehow utilize that many cores. You might be able to do something like one hell of a physics processing game with very high FPS that way, but otherwise, IDK how much you could actually do with it.[/citation]
Run your software renderer with it, maybe? By the way, it does still have Larrabee's graphics-specific parts, just turned off – at least according to (I think it was) SemiAccurate.
[citation][nom]technoholic[/nom]Excuse me, but aren't GPUs already more efficient/optimised for parallel workloads? Isn't the nature of GPUs (more cores + the higher memory bandwidth of graphics memory + their ability to act alongside a CPU) more suitable for parallel computing? Also, GPUs are relatively cheap and widely available; even a home user can do accelerated work on their GPU. IMO x86 is showing its age, and Intel doesn't want to adopt new tech[/citation]
Not really – they have terrible per-thread performance and a far more limited set of operations than something like this. Hit a serial portion of code on a GPU (and every program has one) and performance falls through the floor; that's much less of an issue here, and it's also a lot easier to avoid (see the rough Amdahl's-law numbers below). This has over 1 TFLOPS of peak double-precision performance, so raw throughput is hardly the problem – and the power usage is about the same too. It has graphics-class memory bandwidth, and it will "act alongside a CPU" just fine. As for availability: far more people could program for this than for GPGPU.
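To put rough numbers on the serial-code point, here's a quick Amdahl's-law sketch (my own figures, purely illustrative – the 5% serial fraction and the thread counts are made up, not measurements of the Phi or any GPU):

[code]
#include <stdio.h>

int main(void)
{
    /* Amdahl's law: speedup = 1 / (serial + (1 - serial) / n) */
    double serial = 0.05;                  /* assume 5% of the run time can't be parallelised */
    int n[] = { 16, 60, 240, 2880 };       /* made-up thread counts, purely for illustration */

    for (int i = 0; i < 4; i++) {
        double speedup = 1.0 / (serial + (1.0 - serial) / n[i]);
        printf("%4d-way parallel: %5.1fx speedup (hard limit %.0fx)\n",
               n[i], speedup, 1.0 / serial);
    }
    return 0;
}
[/code]

Going from 240-way to 2880-way parallelism barely moves the needle once that 5% serial portion dominates, which is why per-thread performance still matters so much.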
SemiAccurate mightn't be the most reliable source of information, but they do know about tech. Check out their articles on the Phi for some ideas...