GT300 Additional Info!



More like a Valentine's Day gift for your machine. Guess you didn't read the Anand article posted above:

"The price is a valid concern. Fermi is a 40nm GPU just like RV870 but it has a 40% higher transistor count. Both are built at TSMC, so you can expect that Fermi will cost NVIDIA more to make than ATI's Radeon HD 5870.

Then timing is just as valid, because while Fermi currently exists on paper, it's not a product yet. Fermi is late. Clock speeds, configurations and price points have yet to be finalized. NVIDIA just recently got working chips back and it's going to be at least two months before I see the first...
I hope those research people in the scientific community get some large grants and spend a lot of money on Tesla cards, as they are going to need it.


I doubt it.


I've been asking our software vendors and been told "nope" repeatedly in the past (most recently a few months ago).


While the theoretical gains are there for proof-of-concept academic problems, it seems that when the problems scale up to industrial or large-scale academic ones, the gains disappear for the kind of stuff I do (CFD).


I'd love someone to port OpenFOAM/Fluent/CFX/StarCD to OpenCL or similar so run-times could be cut, but if the gains are not there (despite what Nvidia says), then they won't be.
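
Just to put a picture on what "porting to OpenCL or similar" actually means, here is a minimal, purely hypothetical sketch (written in CUDA for brevity rather than OpenCL, but the structure is the same) of the kind of data-parallel kernel a pressure solver would offload. The function name, grid layout and launch sizes are made up for illustration, not taken from OpenFOAM/Fluent/CFX/StarCD.

    // Hypothetical sketch: one Jacobi relaxation step on a 2D pressure grid,
    // the sort of inner loop a CFD solver might offload to the GPU.
    __global__ void jacobi_step(const float* p_old, float* p_new,
                                const float* rhs, int nx, int ny, float h2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        if (i < 1 || j < 1 || i >= nx - 1 || j >= ny - 1) return;  // skip boundary cells

        int idx = j * nx + i;
        // New value = average of the four neighbours minus the source term.
        p_new[idx] = 0.25f * (p_old[idx - 1] + p_old[idx + 1] +
                              p_old[idx - nx] + p_old[idx + nx] -
                              h2 * rhs[idx]);
    }

    // Host side: launch one thread per interior cell.
    // dim3 block(16, 16);
    // dim3 grid((nx + 15) / 16, (ny + 15) / 16);
    // jacobi_step<<<grid, block>>>(d_p_old, d_p_new, d_rhs, nx, ny, h * h);

Kernels like that are the easy part; the run-time in real solvers is dominated by unstructured meshes, implicit linear solvers and inter-node communication, which is presumably where the proof-of-concept gains mentioned above evaporate.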
 
So, are we seriously saying Nvidia has failed to meet DX specs, at least in hardware, yet again?

Essentially, yes, but of course the current reply is that 'it will be able to run DX11 calls and do all the DX11 stuff... just like any other DX11 part', the last part of course omitting the potential caveat 'but only slower'. By that rationale, though, any computer with a CPU could emulate DX#, and that's where the whole capable-versus-compliant thing gets blurry. Being ultra-technical about it, the chip is likely going to be both capable (although somewhat software-assisted) and probably compliant as well (it's doubtful they wouldn't get the stamp of approval). But if it doesn't have dedicated hardware, isn't any compute-shader-capable part going to be 'capable' outside of the strict in-hardware spec?

I hope for their sakes they are playing a long game here because in the short term this is going to hurt, badly.

I agree, but in the even shorter term people won't know the difference, and in my opinion their strategy is to get their nose pointed in the general-computing direction and gain their experience there rather than perfect a graphics-oriented gaming card. Which to me means a ton of computing power, but nowhere near the gaming performance per transistor that some people would've hoped for.

Of course all their demos will be ones that show off some kind of PhysX solution or ones that use OpenCL in addition.

"I hope those research people in the scientific community get some large grants and spend a lot of money on Tesla cards, as they are going to need it."

Yeah, well the Oak Ridge boyz already placed an order, but they really order anything and everything and have deeper pockets than either IHV.
 
I'm sure this will improve in the future. Here's a link to a comparison image between DX10 and DX11:
http://www.firingsquad.com/media/gallery_image.asp/2039/6

I couldn't tell the difference between them, or perhaps my eyes aren't up to the level.

The difference between DX9 and DX10 in Crysis I could tell right away. It's god-like.

Greatape is probably right that Nvidia might fall short on DX11. I'm sure Nvidia will improve their hardware as well.
 
I just had another thought in the bathroom. The G(T)300 could be the first totally forward-compatible card. Think about it: if they are going to handle DX11 effects in software, then they could probably do the same thing with DX12, DX13, DX14, etc. All Nvidia would have to do is update the drivers to handle the new effects. I don't think the G(T)300 has enough power to run a whole game's effects on its 512 SP/CUDA processors, but I can totally see a future G(T)4/500 on 22nm with 1024+ SPs handling things OK. And the ability to run any new game with just a driver update sounds cool to me.
 
Unfortunately, though, they would have to update the drivers, and if you were Nvidia, would you really do something that meant we didn't have to buy more of your cards? Also, I'm hoping DX11 will last longer than DX10, more like DX9, so that when DX12 comes around, a 512- or 1024-core GPU will hopefully be obsolete anyway.
 



ATI's GDDR5 (first generation) was plagued with high idle power requirements. Now, ATI has clearly fixed that, as the 5870 idles at 27 W. Let's see how Nvidia does with its first attempt at GDDR5.
 
Well, after reading this article I can honestly say that Nvidia is taking a huge gamble for their next generation...

There is the possibility that even with its unconventional design/architecture it may still run games very well, but unless they can keep costs in line it will need to be ridiculously fast to justify higher pricing than ATI's cards.

However, I wonder if Nvidia might be willing to take a beating for the next year for a more long-term goal. With LRB due sometime between now, soon, and later, we're going to see a third player in the graphics market with money to throw around that Nvidia can't even hope to match. We've seen for years that "TWIMTBP" has been a thorn in ATI's side, with games being optimized for Nvidia cards and often having better compatibility at release.

Maybe, and this is PURE speculation, but MAYBE Nvidia knows that it can't compete with Intel the same way it has with ATI. If LRB and G300/GT300/F100/etc. have more similarities than differences, isn't it possible that "TWIMTBP" jumps to a whole new level? If Nvidia and Intel have similar architectures (CPGPU) with a massive amount of capital behind them, not to mention Intel CPUs, couldn't game developers end up coding to the strengths of Intel/Nvidia?

Obviously I'm just trying to be objective; I have no loyalties to any company. It was just a thought that came to me...
 


I think a lot of people wrongly assume that size means more power, when it doesn't always. What happened last series was down to the move to GDDR5; that is why ATI's cards drew more power, especially at idle.

ATI has fixed it pretty spectacularly; Nvidia needs to get it right the very first time. That's yet another thing they need to get right on the first attempt, and you can see why I'm very skeptical.
 
So what are we saying now, then, that Nvidia beat Intel to releasing LRB?

I would argue yes. The fact that they built the G(T)300 to support so many CPU programming languages suggests they want developers to be able to support it more easily. If people are used to coding in C++ or Java, why run that code on the CPU if it runs faster on the GPGPU? Nvidia has made no secret of its desire to have a CPU; I think this is their first shot at it.
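
To make that concrete, here's a toy, hypothetical sketch of what "run it on the GPGPU instead of the CPU" looks like in CUDA C++: a plain array update that would normally be a for-loop on the CPU, moved to the GPU. The names and sizes are invented, and error checking is omitted.

    #include <cuda_runtime.h>

    // Same arithmetic as the CPU loop: y[i] = a * x[i] + y[i]
    __global__ void saxpy(int n, float a, const float* x, float* y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // Host wrapper: copy data over, run the kernel, copy the result back.
    void saxpy_gpu(int n, float a, const float* h_x, float* h_y)
    {
        float *d_x, *d_y;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void**)&d_x, bytes);
        cudaMalloc((void**)&d_y, bytes);
        cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);

        cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
        cudaFree(d_x);
        cudaFree(d_y);
    }

Whether that actually beats the CPU depends on how much work is done per byte shuffled over PCIe, which is exactly the scaling problem the CFD posts above ran into.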

Can anybody explain to me why a lot of people here are thoroughly misinformed when it comes to power consumption?

e.g. "it consumes a lot of power because it's Nvidia"

Probably has to do with the much larger die size. Nvidia's chips are physically much larger than AMD's, and most people probably assume that larger = more power. If Nvidia can turn off enough parts of the chip, its idle might be low, and if it uses an efficient design, it might consume less than AMD's chips. What is a fact is that if Nvidia's die size is bigger than AMD's, they will get fewer chips per wafer, so each chip will cost more (assuming the same yield). You saw this most clearly with Intel and AMD when Intel was at 45nm and AMD was at 65nm: Intel could simply get more chips per wafer. Seeing as Nvidia and AMD will be using the same 40nm TSMC process, I'm assuming yields will be comparable. If Nvidia's die is twice as large as AMD's, then they will have roughly half as many chips available for sale.
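
The chips-per-wafer point is easy to sanity-check with a back-of-the-envelope host-side snippet. The die areas below are placeholders, not actual GT300 or RV870 figures, and the estimate deliberately ignores edge loss, scribe lines and yield, all of which push the real counts lower.

    #include <stdio.h>

    static const double PI = 3.14159265358979;

    // Crude dies-per-wafer estimate: usable wafer area divided by die area.
    int dies_per_wafer(double wafer_diameter_mm, double die_area_mm2)
    {
        double radius = wafer_diameter_mm / 2.0;
        return (int)(PI * radius * radius / die_area_mm2);
    }

    int main(void)
    {
        double small_die = 250.0;            // placeholder area in mm^2
        double big_die   = 2.0 * small_die;  // a die twice as large
        printf("small die: ~%d candidates per 300 mm wafer\n",
               dies_per_wafer(300.0, small_die));
        printf("big die:   ~%d candidates per 300 mm wafer\n",
               dies_per_wafer(300.0, big_die));
        // Twice the area gives roughly half the candidate dies, so at equal
        // yield each good chip costs roughly twice as much in silicon.
        return 0;
    }

With those placeholder numbers it works out to roughly 280 versus 140 candidate dies, which is the "half as many chips for sale" point in a nutshell.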
 
It is a total fake, and a pretty bad one at that. If you look at pic 2 you can quite clearly see that the PCB runs out of room way before it's meant to; that's why there are features right at the very edge.

Maybe this is real and the card is actually 15" long but that had to be cut off the end lol.
 


Yeah, the self-shadowing is pretty apparent (and effective) and the transitions are less stark. Somewhat subtle, but we should see not only visual improvements but speed improvements too. That we're seeing them prior to the API's launch bodes well for DX11.
 


Except that you could've done a lot of that with the HD4K and HD5K using OpenCL and compute shaders, and the same to a lesser extent with the GTX2xx and HD3K+, but they wouldn't be very effective at it. It would be similar to doing multiple passes to achieve something in SM2.0 that you could do in a single pass in SM3.0: unless it can do it fast enough, it's no more of a benefit than CPU rendering, which we could do for even longer.
 


Yeah, if you look at it, there's a typical 8-pin power connector mount... where there's nothing, and then the start of a 6/8-pin mount that is cut off at 2 pins, and neither the 6-pin at the top nor the 8-pin at the back lines up with any power circuitry.

I guess it's easy to cut holes in the PCB for circulation when there's nothing critical on the board you need to worry about cutting.