GT300 series, real specs revealed.

Looks very promising. At least then we will have something that can run Crysis.
By the way, have you seen the RV870's specs? It looks like there will be a serious battle in the GPU market. I can't wait :bounce: for the bloodshed/bleeding-edge graphics and the competitive price cuts! I am going to get myself one of these for very cheap 😀
 
I'm extremely excited for these cards; I just wish it didn't seem like nVidia was going for the huge chip again, and that they'd go the way of ATI with efficiency rather than brute computational power. They still might, though. Either way, I'm excited.
 
Two slightly smaller chips seem to scale poorly in most apps - that's the biggest issue; they need some kind of third "master" chip to properly and efficiently use all that power etc...

Wonder when Larrabee will show up too...
 
AMD have always made price/performance their strategy for winning over the GPU market, as opposed to Nvidia's brute-strength strategy.
The info states that the GT300 single-chip card will yield 3+ TFLOPS as opposed to AMD/ATI's 2.16 TFLOPS 5870. The GT300 will definitely be riding on a 512-bit memory bus, and the 5870 on a 256-bit memory bus.
Clearly the info points out that the GT300 will be quite pricey due to production costs.
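
For anyone wondering where figures like that come from, the peak numbers are just shader ALU count x ops per clock x clock speed. Purely as an illustration (the 1600 ALUs and 675 MHz below are assumed round figures picked to match the rumored number, not confirmed specs):

1600 ALUs x 2 FLOPS per clock (one MADD) x 0.675 GHz ≈ 2.16 TFLOPS

Bear in mind the two vendors count their shader ops differently, so theoretical TFLOPS between them don't map directly onto gaming performance anyway.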
 
It seems that the new nVidia flagship will again be HUGE. I don't know how scalable this chip will be; if they want to be in the mainstream segment, it will be too expensive to make such huge chips and sell them cheap. On the other hand, ATI have a great advantage in this area. I even expect the 5870 to be smaller than the 4870 and still be around 50% faster.

 
Now I'm hoping these next-gen cards from Nvidia will sort of allow me to do some SQL programming via some native Visual C++ plugin different from CUDA, just in time for my thesis. Imagine how fast a single search query @ 3 TFLOPS would be; I'm thinking I'll be getting an A+ for that.
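
(For the curious: a brute-force search like that, written against CUDA directly, could look something like the minimal sketch below. Every name in it - match_keys, d_keys and so on - is made up for illustration; it isn't any real SQL-on-GPU plugin API.)

#include <cstdio>
#include <cuda_runtime.h>

// Count how many entries in a column of integer keys match a target value,
// i.e. a crude "SELECT COUNT(*) ... WHERE key = target".
__global__ void match_keys(const int *keys, int n, int target, int *hit_count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && keys[i] == target)
        atomicAdd(hit_count, 1);              // one "row" matched
}

int main()
{
    const int n = 1 << 20;                    // ~1M rows of dummy data
    int *d_keys, *d_hits, h_hits = 0;

    cudaMalloc(&d_keys, n * sizeof(int));
    cudaMalloc(&d_hits, sizeof(int));
    cudaMemset(d_keys, 0, n * sizeof(int));   // all-zero dummy keys
    cudaMemset(d_hits, 0, sizeof(int));

    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    match_keys<<<blocks, threads>>>(d_keys, n, 0, d_hits);   // "WHERE key = 0"

    cudaMemcpy(&h_hits, d_hits, sizeof(int), cudaMemcpyDeviceToHost);
    printf("matching rows: %d\n", h_hits);

    cudaFree(d_keys);
    cudaFree(d_hits);
    return 0;
}

At 3 TFLOPS the kernel itself is trivial; the real bottleneck for a query like this would be getting the table across the PCIe bus into video memory in the first place.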
 
Unbelievable how GPUs are bucking the trend of micro-sizing and shrinking. Motherboards, CPUs and RAM are all getting packed into smaller and smaller PCBs, yet GPU cards keep growing. Why not make an LGA-style GPU chip and HSF and use that method?
 
What I wonder is, what are Nvidia going to do in the mainstream?

Even at 40nm eventually, the G200b is going to make a pretty expensive mainstream card. The reason Nvidia are losing the mainstream market now is because their parts cost too much to manufacture.

Anyway, at the enthusiast end the 5870 is going to pretty much annihilate everything currently available, until Nvidia release the GT300, then ATI double up with the 5870 X2 and... wait a minute, doesn't this sound awfully familiar?
 
Seen both specs and I am also really excited. Looking to see how the new MIMD fares against SIMD, and where ATI will actually sit with their final ROP and TMU count.
 
Excited to see the new architectures for sure. Hope they both end up relatively on par so we see some nice prices. Seems the relation between the 300 and 5000 series will be similar to that between the 4000 and 200 series. Sure hope so, as I love price wars :)

Either way, the 5870 X2 and the GTX 380 sure will spew out a ton of FLOPS.
 
Yeah, don't take these specs as absolutes; they are speculated specs, if you will.
As far as slightly smaller vs behemoth chips goes, Xbit has an excellent review showing the 4890 in CF holding its own vs the 285 in SLI.
Let's face it: nVidia has turned towards the GPGPU solution, and that's a major part of their strategy as well as their die size. They'll be going DP in a major way for this, and that costs real estate on the GPU. They're trying to drive the market in this direction, trying to preempt LRB, and they want to compete and lead in that market. What I wonder is, will they continue heading in this direction, having both GPGPU and gaming perf together, or make two units, each aimed specifically at its own market?
 


I just read that review and it showed that 4890 CrossFire and GTX 285 SLI are "identical" at high resolution, with each having a game that runs decidedly better on that particular platform (both are terrible games, to be honest). All I can say is wow, I might be trading my 4870 X2s in for three 4890s!
 
Wow. These specs are awesome. Hopefully Nvidia and ATI will have this "price war" so I can get myself one of these puppies! :sol:

I think this pushes me to wait to do my upgrade. Gonna get Windows 7 64-bit, overclock my C2D E8400 to 6GHz, go to 8GB of RAM, and most importantly, dump my 8800GTX and get a GTX 380 or a 5870 X2!

(Yes, I mean 6GHz!) :sol: :sol: :sol:

:lol:
 


Whoever gave him a thumbs down needs their head looking at. Theo Valich is one of the worst tech reporters I've ever come across; misleading, biased, badly sourced, poorly written - those are just a few of the terms that can be used to describe Mr Valich's reporting style. He even worked for Tom's Hardware for a bit as a reporter; I haven't seen a news item from him for ages, so I can only assume that someone fired him (good on them).

It's going to be interesting to see just how much faster the GTX 380 is than the HD 5870, and how much more it's going to cost to buy. If the ratios are the same as last year's (GTX 280 vs HD 4870) then I'll be buying the Radeon to replace my ageing 8800GT; if it gets thumped into oblivion (8800GTX vs HD 2900XT) then I'll be looking at getting a GTX 360 or equivalent.
 
To Do List:
-need to get USB wireless adapter
-need to get Crysis Maximum Edition
-need to get Razer Lycosa keyboard

Wait a Year:
-Get a 1920x1200 120Hz 3D screen and Nvidia 3D bundle
-Get GTX 395
 
I suspect most of what we will see in these speculated specs will be aimed at GPGPU usage. Not that the G300 won't be a killer card, but taking these numbers, which are pure speculation at this point, and saying they'll translate directly into rendering/gaming performance won't happen. Look at the difference in transistor count and die size between the R700 and the G200. The G200 isn't almost 50% better/faster than the R700; more like 15-20% at best, and it loses at worst, depending on the game. So I say take this all with a grain of salt.
 
They will when they have to. What is the difference? If they used GDDR5 with a 512-bit bus they would have twice the bandwidth they need, and the card would cost 50% more to produce.

When they need the bandwidth they will go 512-bit; until then, enjoy the price savings. Why would you want to pay extra cash for bandwidth you don't use? I assure you the ATI techs know what they are doing, and won't be putting out a card that is starved for memory bandwidth, or one that has a hell of a lot extra. I'm sure that had nVidia gone with GDDR5 this gen, they would also only use 256-bit. They are both in the business of making money, not adding unbalanced features that increase the cost for no performance gain.
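
Rough numbers to put that in perspective (the effective memory clocks here are assumed round figures for illustration, not actual specs):

256-bit bus / 8 x ~4.0 GT/s GDDR5 ≈ 128 GB/s
512-bit bus / 8 x ~2.0 GT/s GDDR3 ≈ 128 GB/s
512-bit bus / 8 x ~4.0 GT/s GDDR5 ≈ 256 GB/s - roughly double, as the post above says, and all of it paid for in board and memory controller complexity whether the GPU can use it or not.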