JAYDEEJOHN :
Tell me one thing I posted in that long post that's wrong. You don't get it, do you?
Why do you think nVidia is becoming more CPU-like? Why do you think compute shaders are here? Why do you think WDDM 1.1 is here and does what it does?
I think some people really don't see LRB for what it really is.
Yes, it'll render games, and probably be decent at it, but it won't shake the world.
But Intel's intentions aren't limited to that.
This isn't about who's right; it's beyond that. Think: why would Intel invest billions of dollars for a GPU only? It doesn't make sense; there are better ways to spend that money, right?
Like I said, AMD has been slammed for doing what Intel is now doing, and yes, it's cost them. But again, that's not the point. If people can't truly see the reasons for the ATI acquisition and LRB, and the untold billions of dollars spent on them beyond just making GPUs in a struggling field/market, there's not a lot I can say besides wait and see.
The reason Nvidia is moving toward more general processing is that they realize they can't rely solely on selling high-end graphics cards to power users in order to survive. Now that Nvidia is almost out of Intel's chipset market, they need to find new revenue. Aside from that, Nvidia realized they also need to offer consumers a complete platform, not just components. Personally I think that attempt is quite futile, but we shall see.
Make no mistake, using a GPU solely as a general processing tool is about as smart as using a sports car for grocery shopping. Yes, the GPU architecture does help in some (read: some) situations, but certainly not in every one of them, or even most of them.
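To put the "some (read: some) situations" point in concrete terms, here is a minimal C sketch (purely illustrative, not taken from any of the posts or from vendor code). The first loop is embarrassingly parallel: every iteration is independent, which is exactly the shape of work that thousands of GPU threads or a wide SIMD unit chew through. The second loop carries a dependency from one iteration to the next, so throwing more lanes at it buys almost nothing.

#include <stddef.h>

/* Data-parallel: a natural fit for a GPU or a wide vector unit.
 * Every y[i] can be computed independently of every other one. */
void saxpy(size_t n, float a, const float *x, float *y)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* Serially dependent: each step needs the previous result,
 * so extra GPU or SIMD lanes mostly sit idle. */
float running_filter(size_t n, const float *x)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc = 0.5f * acc + x[i];
    return acc;
}

That is the "sports car for grocery shopping" argument in two loops: the hardware only pays off when the workload has the first shape.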
This is why Intel and AMD have both invested a lot of money in developing general-purpose GPUs that can be integrated into their CPU platforms. Standalone GPGPU is not here to stay, but a hybrid CPU/GPU is. No one is going to buy a graphics card just for the sake of doing some general-purpose processing; it's simply not economical.
Therefore I see absolutely no reason why AMD would want to continue down the GPGPU road with its Radeon 58xx line, yet they even went as far as comparing a 5870 to an i7 975. Yes, its throughput is very impressive, but without the right software, it's as good as nothing.
JAYDEEJOHN :
Why do people think BD is delayed? This is THE transition, the reason Intel's spent its own billions.
Tell me the last time Intel's gone out and hired hundreds of new people, spent over 3 years of dev time, and planned on keeping all those people. And guess what, they don't come cheap, nor do all those man-hours that are adding up each day, on top of the fab costs.
If people want to believe it's so Intel can say they too can play Crysis, then there's not a lot I can say to convince them otherwise.
I agree with you wholeheartedly. I mean, how dare they? How dare they even remotely suggest that AMD may be delaying THE TRANSITION? I mean, look at Barcelona! It was THE TRANSITION for them in 2006, with its 40% better performance across the board... oh wait, that's not right, is it?
Given how bleak AMD's financial condition is right now (it has been losing money continuously since 2006), and with the 1.5B in senior notes due in 2012, I really have some doubts about how much AMD can pull off. Some unconfirmed reports say that Bulldozer will be a game changer, and I would be very impressed to see that materialize, but only time will tell whether Bulldozer lives up to Hector's hype. Saying whether Bulldozer will be delayed or not is just pure speculation.
Oh, what's that? You say Abu Dhabi will bail them out? Remember that ATIC is actually in a joint venture with AMD; it isn't AMD's backer. It took the fabrication plants off AMD's hands and jointly invests money in the new Global Foundries. I must have missed the part where the deal explicitly said that ATIC will give AMD money.
But again, this is AMD! AMD can do anything! I mean, this is the company that claimed the first Bulldozer silicon would be made on a 45nm process node and launch in 2009! Oh wait... that's not right again, is it?
JAYDEEJOHN :
I just don't understand people thinking LRB is on time, isn't costing a ton of resources, and is for games only.
It's late, the bug fixes are being done now, and the first spin sucked badly.
That's not what Jack said though.
Here is what is impressive:
* A completely x86-based architecture rendering animated, detailed images. What I mean by this: by today's standards you have basically one type of GPU architecture -- oodles of SIMD cores, each doing a few very simple things. Larrabee is a completely different beast altogether, and what we see here is x number of x86 cores, each with a massively reworked vector/SIMD unit that is very wide (see the first sketch after this list). Proof of concept is very important.
* This is on early silicon. How long ago that was, and where they are now, is not known, but getting this running on early silicon is also a good sign.
* What they rendered in the demo was not your standard rasterized image but a ray-traced one, and though you can see the frame rates are somewhere around 15-20, that in and of itself is impressive (see the second sketch after this list) -- go to conventional rasterized rendering and a) you lose a boatload of quality, and b) computationally the frame rates would be about where today's high-end cards perform; in other words, if it were even possible (i.e. if software actually existed for real-time ray tracing), nVidia's or ATI's top cards would not be doing much better.
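On the first bullet, here is a rough C sketch of what "x86 cores with a very wide vector unit" means in practice (purely illustrative; the LANES count and the vec16 type are stand-ins, not Intel's actual LRBni intrinsics or code). Each core runs ordinary x86 scalar code, but a single vector operation touches 16 single-precision values at once, the way a 512-bit register holds 16 floats.

#define LANES 16   /* 512 bits / 32-bit floats = 16 lanes per vector register */

/* Stand-in for one very wide vector register. */
typedef struct { float v[LANES]; } vec16;

/* One "instruction": a multiply-add across all 16 lanes at once.
 * A conventional GPU gets its width from oodles of tiny cores;
 * a Larrabee-style design gets it from a few fat x86 cores,
 * each doing something like this every cycle. */
static vec16 vec16_madd(vec16 a, vec16 b, vec16 c)
{
    vec16 r;
    for (int i = 0; i < LANES; ++i)
        r.v[i] = a.v[i] * b.v[i] + c.v[i];
    return r;
}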
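On the last bullet, the reason 15-20 fps of ray tracing on early silicon is still notable: every pixel fires at least one ray that must be intersected against the scene geometry, instead of going through a rasterizer's fixed-function triangle setup. Here is a minimal C sketch of that per-pixel inner step, using a hypothetical ray/sphere test (not the demo's actual code):

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  sub3(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

/* Distance along the ray to the nearest hit, or -1.0f on a miss.
 * 'dir' is assumed to be unit length.  A real tracer runs something
 * like this (plus shading, shadows, reflections) for every pixel of
 * every frame, which is where all the compute goes. */
static float ray_sphere(vec3 origin, vec3 dir, vec3 center, float radius)
{
    vec3  oc   = sub3(origin, center);
    float b    = 2.0f * dot3(oc, dir);
    float c    = dot3(oc, oc) - radius * radius;
    float disc = b * b - 4.0f * c;
    if (disc < 0.0f)
        return -1.0f;                  /* ray misses the sphere */
    return (-b - sqrtf(disc)) * 0.5f;  /* nearest intersection */
}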
Hmm... should I trust him, the ultimate, rabid Intel fanboy, or you, the Mod of THGF?
