Nvidia Betting its CUDA GPU Future With 'Fermi'

Yes that's really nice for scientific purposes and stuff. Now where's your DX11 GPU and new motherboard chipsets?

I want to see if they can come up with something interesting enough to make them my next choice; otherwise I'll definitely be moving over to AMD/ATI.
 
If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, i.e. 1 frame of the 60 in a second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).
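Back-of-the-envelope on that claim (taking the 23-hours-per-frame figure at face value): real time at 60 fps leaves 1/60 of a second per frame, so the speedup needed over that quad-core baseline is roughly

$$23\,\text{h/frame} \times 3600\,\text{s/h} \times 60\,\text{frames/s} \approx 5 \times 10^{6},$$

i.e. about five million times faster, which gives a sense of how far off real-time offline-quality rendering still is.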
 
[citation][nom]Antilycus[/nom]If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, i.e. 1 frame of the 60 in a second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).[/citation]
Hell, if this happened, nVidia would have a MASSIVE advantage. I'm sure many pro 3D designers would be willing to shell out $5k+ for a card like this, considering the time saved in the long run.

Then again, this probably won't happen very soon. Also, I'm assuming it would be true ray-traced rendering.
 
I think this is a logical step by Nvidia to head in the GPGPU direction if it's to survive. Hats off to them if they do release the product.

But rest assured, AMD/ATI will come out with a GPU in Q2/Q3 2010 with similar features. You never know, the GT300 might not make it out to the market by then 🙂
 
[citation][nom]saint19[/nom]This sounds like the new line of nVidia GPU to compete with the new HD 58xx....[/citation]
It sounds like a new GPU line that will crush ATI's entire FirePro 3D line.
 
The professional applications and scientific computing market is big. It's not as big as the video game business, but the number of professional and scientific applications taking advantage of CUDA after only a couple of years is huge, and they have an installed base way larger than ATI's. If they can keep that base, as they likely will, the entire graphic design, movie production, animation, GIS, and oil & gas industries, plus academia, may shift to optimizing their tools for CUDA. I personally think they'd be better off designing for OpenCL, so that they can use ATI or Nvidia cards for GPU acceleration; it might not be as optimized as if it were tuned specifically for one architecture, but it would hedge their bets. Considering we can all, by the end of this year, have maybe 20 TFLOPS with 4 x 5870 X2s on our desktops, it's amazing. What would four of these new cards be capable of?
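To make the CUDA-versus-OpenCL point concrete, here is a minimal SAXPY sketch in CUDA (a generic illustration, not tied to any of the applications above). The kernel body maps almost line-for-line onto OpenCL C; the lock-in is mostly in the host API, the tooling, and vendor-specific tuning rather than in the kernel math.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// y = a*x + y, one thread per element. The equivalent OpenCL kernel is
// nearly identical (get_global_id(0) instead of blockIdx/threadIdx), but
// would also run on ATI GPUs and even CPUs.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device buffers and copies; this host-side plumbing is the part that
    // differs most between CUDA and OpenCL.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch with 256 threads per block, enough blocks to cover n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %.1f (expect 4.0)\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    free(hx);
    free(hy);
    return 0;
}
```

Either way, the heavy lifting is the same data-parallel kernel; what a vendor-neutral API buys you is the option to point it at whichever cards are fastest or cheapest that year.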
 
I am not a programmer. I am not a scientist. I am not a doctor.

I am a gamer.

Do I really need all of that?

Let's just say I want to play Crysis with maxed out settings on my modest 1680×1050 LCD display.

Do I really need to pay a premium price for a bunch of stuff I won't even need?

Are all of those bells and whistles just there to make people scream "ME WANT! ME WANT!"?

I see that this product will be very valuable for programmers, scientists, medical imaging, NASA and the like. But I'm just a gamer!
 
Methinks a higher-frequency binned part will emerge from ATi as the 5890 (1 GHz?), and everyone will be happy again. Except NVDA shareholders.

Nothing is known yet about the power consumption. ATi again has a smaller design that's scalable to an X2. Three billion transistors will output a LOT of heat. She's gonna be HOT, she's gonna be expensive, and she'll lose to both the 5870 X2 and the 5850 X2 by large margins. Heck, there isn't even a speculated release date here!

Come on, you're gonna make a 3-billion-transistor part and disable a billion of them to make a cheaper part? You just wasted so much money on making it! New PCB design cost issues, maybe?
 
Looks like the 'Fermi' board Nvidia showed off was a fake.

http://www.semiaccurate.com/2009/10/01/nvidia-fakes-fermi-boards-gtc/

Doesn't look too good for Nvidia :)
 
[citation][nom]joeman42[/nom]No, because it uses Havok instead of PhysX. For the same reason Nvidia sabotaged ATI on physics, it will be unable to process the effects in CryEngine. The silicon that could have been used to accelerate it (like ATI did) will be wasted idling. Larrabee, on the other hand, being a collection of x86 engines, can be reconfigured on the fly to use all of its muscle on each game. Nvidia is going out on a proprietary limb, which may very well be an evolutionary dead end.[/citation]
=D
I like the idea of Larrabee because of its massive compatibility and flexibility. Shitloads of x86 cores is as good as it gets.

[citation][nom]saint19[/nom]This sounds like the new line of nVidia GPU to compete with the new HD 58xx....[/citation]
At first glance at the chart, I thought "GTX 300!! YAY!"... Hopefully this was a sneak preview of the (hopefully soon-to-be-released) GTX 3xx line.
 
Tom's, I hope you take notice that the Fermi board shown during the presentation was a fake (they probably had real ones behind closed doors, or pre-rendered the demos), but there's a lot of evidence all over of why they're fakes (at least the one they held up and said "this puppy is Fermi" about).
 
Are they running out of stuff to name their products after?? Enrico Fermi wasn't even that involved with semiconductors. Maybe just having the Eg/2 point (the Fermi level) named after him merits having some advanced piece of silicon named after him too; what do I know.
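For reference (textbook semiconductor physics, not from the article): in an intrinsic semiconductor the Fermi level sits essentially at mid-gap, which is the "Eg/2" being referred to:

$$E_F \approx \frac{E_c + E_v}{2} + \frac{kT}{2}\ln\frac{N_v}{N_c} \;\approx\; E_v + \frac{E_g}{2},$$

since the logarithmic correction is only of order kT at room temperature.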
 
[citation][nom]Antilycus[/nom]If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, i.e. 1 frame of the 60 in a second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).[/citation]

nVidia bought 3dfx... just like they did Ageia. Diamond sells ATI cards... www.diamondmm.com ... same website they've had for the last 10 years. nVidia has always been nVidia. nVidia's first GPU was the NV1, released in 1995... which failed miserably.
 
Since the table compares Fermi against the current GT200 & G80 GPUs, both of which were used in Tesla as well as GeForce, it indicates that Fermi will be used in Nvidia's upcoming graphics cards. If these are the specs for the new GeForce cards, and the rumors about GDDR5 memory are right, then no ATi card will match the performance of the next-generation GeForce series. But pricing will still be a factor, though that shouldn't bother Nvidia fans.
 