GT300 Additional Info!



More like a Valentine's Day gift for your machine. Guess you didn't read the Anand article posted above:

"The price is a valid concern. Fermi is a 40nm GPU just like RV870 but it has a 40% higher transistor count. Both are built at TSMC, so you can expect that Fermi will cost NVIDIA more to make than ATI's Radeon HD 5870.

Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first...


Yeah, and of course he said his piece on the matter over at B3D (seems a little defensive, but that's somewhat understandable after all the flak he's taken and still continues to take):

http://forum.beyond3d.com/showpost.php?p=1342461&postcount=59

He's still saying yields are very poor, which is kinda confirmed by JHH holding up an F100/G300/GT300 card yet being unable to actually plug one in. :heink:

If you look closely at the fan assembly on the card, it doesn't look like there's anything behind it: no circuitry on the grey stuff, and what appears to be background light coming straight through a gap in the card. :ouch:

I love how someone in that thread manages to complain about no die shot of the RV870 as if nV making the view of the naked chip available were preferable to making the actual card available. 😉
 


Yeah, but the tessellator is supposed to reduce the workload overhead, so emulating it in the compute cores seems like it would carry a performance penalty.

So all the bad-mouthing of the difference between the HD3K/4K tessellators and the DX11 spec seems pointless if the spec is going to allow the hull and domain shaders to be emulated by general compute shaders (rough sketch of the workload below).

It'll be interesting to see how that develops.
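To put a rough number on that overhead (just an illustrative sketch, not anything from nV's or ATI's actual hardware): the tessellator's job is to expand a per-patch tessellation factor into a pile of barycentric domain points that the domain shader then displaces, and that expansion is exactly the work an emulated tessellator would have to burn shader cycles and bandwidth on.

```cpp
// Illustrative only: uniform subdivision of one triangular patch, i.e. the
// kind of domain-point generation a fixed-function DX11 tessellator performs.
// If this runs on the general compute cores instead, those cycles come out of
// the same pool that would otherwise be shading the frame.
#include <cstdio>
#include <vector>

struct DomainPoint { float u, v, w; };   // barycentric coords inside the patch

std::vector<DomainPoint> tessellateTri(int tessFactor)
{
    std::vector<DomainPoint> pts;
    for (int i = 0; i <= tessFactor; ++i) {
        for (int j = 0; j <= tessFactor - i; ++j) {
            float u = float(i) / float(tessFactor);
            float v = float(j) / float(tessFactor);
            pts.push_back({ u, v, 1.0f - u - v });
        }
    }
    return pts;
}

int main()
{
    // The point count grows roughly with the square of the tess factor,
    // which is the amplification an emulated tessellator has to eat.
    for (int f : { 1, 4, 16, 64 })
        std::printf("tess factor %2d -> %zu domain points\n",
                    f, tessellateTri(f).size());
    return 0;
}
```

At a tess factor of 64 that's a couple of thousand points per patch before the domain shader even runs, so whether the scheduler can hide that work is the interesting question.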

 
That's what I was thinking. Is tessellation the only thing they are handling with the SPs, or are there other DX11 features that will be handled this way? Will the compute shaders be handled like this? Sure, they more than doubled the SP count, but if they have to stop and handle all/many/some of the DX11 features in the SPs, what's left to render the scene? The direction they are taking could work, but I'm not sure it's doable until they hit a smaller node and get WAY more SPs in there.
 
I think they could be making the best of a bad situation as far as the tessellation goes. Technically the card is capable of doing it, but then, if you believe Nvidia, they are looking to go into other markets anyway. And the key thing for me is that tessellation is most probably not going to be that important, at least not for this generation. I'm sure they haven't forgotten how to make a decent 3D card, and I wouldn't be surprised if they are working on something in case the attempt to go into other markets goes belly up.
I also don't think DX11 is that important either, not as far as the cards go. Yes, we need the games from a need-to-drive-the-new-tech point of view, but from the hardware point of view ATI find themselves in the same situation Nvidia was in with the 8800GTX etc., so the pendulum swings again.
It's ATI that have a card that can destroy today's games and is next-gen DX capable when it's not too obvious exactly what that's worth, while Nvidia has had to redesign and has nothing close to release on the horizon.
They seem to be trying to keep their options open with this chip, and it remains to be seen how much, if any, the huge lean in the computing direction will take away from the chip's gaming performance.
I'm looking forward to seeing how the thing works.

Mactronix
 
I get what Nvidia is trying to say about DX11 being no big deal, and I personally agree that Nvidia is right on this one. I can hardly tell the difference between a DX11 image and a DX10 one; the gap between DX10 and DX11 doesn't seem as noticeable as the gap between DX9 and DX10. From what I recall, HDR, water, and blur were all very impressive to me, but DX11 hasn't impressed me. Maybe when DX11 improves, I'll change my mind.

However, I disagree with the point where Nvidia said DX11 is not important. From an economic point of view, the only time you say something is "not important" is when your product isn't good enough to compare with the other company's product.
 


Where have you seen DX11 in action, then? Links please :)

Mactronix
 
Of course you can't tell the difference between DX10 and DX11. Changing the API does not inherently change the graphics. Even early DX10 looked the same as DX9. Until a version of DX is almost obsolete the difference isn't breathtaking. Once again, look at the early DX9 games and compare them to Crysis in DX9 and tell me there's no difference.
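Side note, since this keeps coming up: here's a minimal sketch (assuming a Windows box with the D3D11 headers; nothing from the posts above) of how the D3D11 runtime reports feature levels. The same create-device call happily runs on DX10-class hardware at a lower level, which is part of why flipping the API by itself doesn't change how a frame looks; the game has to actually use the new features.

```cpp
// Minimal sketch: create a hardware D3D11 device and report which feature
// level it actually got. DX10-class parts come back as 10_0/10_1 rather than
// 11_0, but the same runtime and API surface are used either way.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main()
{
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0,   // full DX11: tessellation, compute shader 5.0
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };

    ID3D11Device*        device  = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL    got     = D3D_FEATURE_LEVEL_10_0;

    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        wanted, (UINT)(sizeof(wanted) / sizeof(wanted[0])),
        D3D11_SDK_VERSION, &device, &got, &context);

    if (SUCCEEDED(hr)) {
        std::printf("Hardware feature level: 0x%04x\n", (unsigned)got);
        context->Release();
        device->Release();
    } else {
        std::printf("No hardware D3D11 device available (hr = 0x%08lx).\n",
                    (unsigned long)hr);
    }
    return 0;
}
```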
 
The point about Nvidia constantly saying stuff 'isn't important' is that they just keep saying it and never actually move anything forward.

The reason they are behind on DX11 is that DX10 wasn't important, then DX10.1 wasn't important.
 
How do you figure, John? They didn't say it would be smaller than the 5800, and the only reference to size is the size of the card. There is a huge difference between the size of the card and the size of the chip. Just because the card is no bigger than the competition or the previous gen doesn't mean the chip isn't larger/more costly to produce.
 


it says "The card was just a normal looking dual-slot card that was no longer or larger than anything on the market today."

The 5870 is 11 inches long and the 5870 X2 will be 12 inches, so this GT300 will fit in any larger mid-tower case... I didn't mean production cost.
 
MILLIONS OF WAFERS DIED IN ORDER TO BRING YOU THIS HEATSPREADER TODAY.

nvidia_fermi_chip.jpg

 
it says "The card was just a normal looking dual-slot card that was no longer or larger than anything on the market today."

Which means that TESLA card you saw is the SAME size as an HD5870, since it's actually SMALLER than a GTX280/285.

Also, you do realize that's the computational card, not a graphics card, so it doesn't have the full requirements of the graphics card. And that's if this is the real deal and not a mock-up, which it still looks like, with a C1060 housing popped onto a blank PCB.

So don't expect some small card out there. I expect it to be at least as big as a GTX280/285, which would make it bigger than the HD5870.
 


Isn't that a heat-spreader from an FX?

The return to the 'F' nomenclature does not bode well for nV. :??:

Anyways, while DX11 isn't mandatory (Intel sells a lot of chips that are barely DX10 compliant through emulation), it does show what nV's situation is, and that's slipping further back down the graphics technology ladder. Everyone said they would forgo DX10.1 to dedicate their efforts to DX11; well, they didn't seem to do that, since they ended up with essentially DX10.1+ instead of full DX11 in dedicated hardware. At what point does it become "well, visuals aren't important in a graphics card, it's the computational power that matters most...", so the return of all-software rendering? :pfff:

If we wanted co-processors, we'd get a second CPU, and likely for more immediate global benefit and cheaper too.

The one area I can see this being helpful would be ray-casting and ray-tracing, but really I don't know whether that design would benefit more or whether the sheer number of the 1600 processors in the HD5870 would be better for that task.

It's the FX series all over again, and I expect to see the return of 'floptimizations' and of paying companies to shackle their games to try to help this situation and make them seem 'forward thinking' by not including current technology. :heink:

However, that being the case, why would you upgrade from a GTX280 in SLI then? The G300 is going to fall short in performance, and all the things it brings to the table that are better are 'not important'. It's a weird strategy, and it banks a lot on PhysX and other general-computing functions in order to sell graphics cards.

It does clarify some of the statements made by people like Tim Sweeney earlier in the year; he was obviously privy to nV's plans back then and was setting forth the PR case that it wasn't about DX or OpenGL, but about general computing.

The main thing will be whether there are any early applications or games that highlight either the benefits or the deficiencies of such a design. It took a while for the shortcomings of the FX series to show, and only about a year and a half later were people regretting their choices. It may be the case here too, where the decision isn't fully felt until we're near the replacement, or at least the refresh, of both cards.

But for anyone looking at a longer-term mid-range card, there's really only going to be one choice from the looks of things, and that's the HD5K series, where the benefits of both computational power and graphics features (where DX11's efficiencies will be most needed) make the old G2xx line pointless and the new F100/G300 series out of reach.

It's a strange decision, but at this point nV is stuck with the prediction they made a year and a half ago that it would all be about compute power and not graphics. There's nothing else they can do now other than market the hell out of that concept, and that starts with yesterday's vapour-launch. 😗
 
Something else to consider, and I have no clue about this at all.
People are being quite quick to say that AMD will get a foothold as the devs will only have their cards to design DX11 games on. Thing is, who knows how long Nvidia has tied certain game houses into the whole TWIMTBP thing?
Could be they would love to use AMD chips now but can't, so they'll have to make the best of it?

Mactronix