GT300 wins over 5870



I could not agree more. I got burned today, like so many others lately, but with a much older card and the age-old problem of incompatibility with the first-gen GX2 cards. Not much help so far with my vintage 7900 GTX Duo; sigh, I got one of my dream cards and can't use it. Look at my avatar: it's the card in the middle. 🙁
 

Thx, it's an exciting time for enthusiasts 😉


Posting rumors is fine, it generates traffic, etc., but posting nonsense isn't very smart in the long term, because nobody will take him seriously. For example, Charlie is biased, but there is often a grain of truth in what he writes, and if you visit his forums, he has quite a lot of readers who respect him. Does ANYONE respect Fuad? Anyone? If he keeps posting nonsense, he will lose not only the last bits of respect anyone has for him, but also his readers.
 
Nvidia needed a die shrink to make a dual-GPU card out of chips with 1.4 billion transistors each.

If the G300 is 3 billion transistors, it will need another die shrink before being made into an X2 card. If the G300 is 2 billion transistors, it's slower than the 5870 anyway, and the G300 X2 will be slower than the 5870 X2.
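A rough back-of-the-envelope sketch of that die-shrink argument, assuming power grows roughly with transistor count at a fixed process node. The GTX 280's 236W and the GTX 295's 289W board TDPs are known specs, but the per-GPU split inside the GTX 295 is my own approximation:

```python
# Back-of-the-envelope: can two GPUs fit under the ~300 W PCIe board limit?
PCIE_BOARD_LIMIT_W = 300       # 75 W slot + 75 W six-pin + 150 W eight-pin

gt200_65nm_tdp_w = 236         # GTX 280 board TDP (65 nm GT200, 1.4B transistors)
gt200b_55nm_tdp_w = 145        # approx. per-GPU budget inside the 289 W GTX 295 (55 nm shrink)

print(2 * gt200_65nm_tdp_w, ">", PCIE_BOARD_LIMIT_W, "-> no dual-GPU card at 65 nm")  # 472 W
print(2 * gt200b_55nm_tdp_w, "<=", PCIE_BOARD_LIMIT_W, "-> GTX 295 became feasible")  # 290 W

# By the same logic, a 3B-transistor G300 would blow well past the budget
# for an X2 card without another shrink.
```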

ATI will hold the performance crown with the 5870 X2 for months, possibly even as long as a year.
 



You only want to crush Nvidia because you are scared of them. What stupidity.
 


And you know that because you looked into your crystal ball to see what the GTX 380 will bring?
 
I would still suspend judgment until the real cards are released and reviewed. No point in mumbling about uncertain and unconfirmed "facts". Besides, anyone can say anything about something that doesn't currently exist.
 



I know that because the HD5870 benchmarks were done using a Phenom II 965 (not OC'd). Think about that for a second before responding.
 

sources? links?
 
HD5870 Crysis Benchmark Score
CPU: AMD Phenom II X4 955BE
OS: Win 7 RTM
VGA: HD5870 1GB

Crysis 1900x1200 4AA + 16AF DX10 Very High
min: 30.**
avg: 43.**
max: 54.**
 
Just to confirm whether you're talking bullsh*t.

And btw, until several sites post their own versions of the review, I won't comment on your views. Forming opinions based on a single source is immature.
 



Yeah, it was bullsh*t. It's not the 965, it's actually the 955BE, oops. My bad. Though from what some sites say, it was OC'd to 3.5GHz.

 


Doesn't matter whether the CPU was overclocked or not: the source is a shady rumor (there are NO official benchmarks out yet) and it doesn't say jack *** about the GTX 380.
 


Why will the GTX 380 be better than the HD5870? What will it do that the HD5870 won't do? Because from the looks of things, the HD5870 can play anything thrown at it, plus support six frameless LCDs.

Tell me what you expect the GTX 380 to do!


 


You could have asked the same question when the GTX 295 was about to be released: "what will it do that the 4870 X2 won't do?" In fact, ATI will release a 5870 X2, so obviously there is demand for cards that are more powerful than the 5870.
 
I for one fully believe that the GT300 will be faster than the 5870. I don't believe it will be better. If the 5870 doubles the SP count to 1600, and if the GT300 more than doubles its count to 512 SPs, then it stands to reason that the upgrade of the faster GTX 280 will be faster than the upgrade of the 4870. If the GTX 280 is faster than the 4870, and you more than double on the faster side while only doubling on the slower side, the slower side will still be slower.
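A quick sanity check of that scaling argument. The SP counts are the rumored and known figures from this thread; the 10% baseline lead for the GTX 280 over the 4870 is an illustrative assumption, and linear scaling with SP count is obviously very naive:

```python
# Naive linear-scaling check of the "more than doubles vs. only doubles" argument.
gtx280_sp, gt300_sp = 240, 512     # GT300 count is the rumored figure
hd4870_sp, hd5870_sp = 800, 1600   # known shader counts

baseline_lead = 1.10               # assumed: GTX 280 ~10% faster than an HD 4870 (= 1.00)

gt300_est = baseline_lead * (gt300_sp / gtx280_sp)   # ~2.35x an HD 4870
hd5870_est = 1.00 * (hd5870_sp / hd4870_sp)          # exactly 2.00x

print(f"GT300 ~{gt300_est:.2f}x vs HD5870 {hd5870_est:.2f}x (HD 4870 = 1.00x)")
# As long as the faster part scales by a bigger factor, it stays faster
# under this model -- which is all the post above is claiming.
```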

This doesn't mean the GT300 will be the better chip, however. It might consume more power, generate more heat, and cost more due to the huge die size, and if it scales down as poorly as the GT200 did, it will leave Nvidia without midrange and lower-end DX11 cards. Unless something changes that we don't know about, I expect the GT300 to be faster than the 5870, but probably worse in every other measurable way.
 



The only reason they released the GTX 295 and 4870 X2 is because the GPUs were bottlenecking the poor i7's. Also, nothing on the market could answer "Can it play Crysis?" So they released them. The problem now is that you can play Crysis on the HD5870 with a Phenom II 955BE; the only reason AMD may release the other higher-end parts is so that all Phenoms get good fps in games, etc.

But if you own an i7, you'll get about 20% better results anyway. So I'd guess 50-60fps (avg) for Crysis with this card.
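The arithmetic behind that guess, using the leaked average from the benchmark above; the 20% i7 uplift is the poster's assumption, not a measured number:

```python
# Projecting the leaked Crysis average onto a faster CPU.
phenom_avg_fps = 43        # leaked HD5870 average on the Phenom II 955BE
i7_uplift = 0.20           # the "about 20%" i7 advantage assumed above

print(f"~{phenom_avg_fps * (1 + i7_uplift):.0f} fps avg")  # ~52 fps, inside the 50-60 guess
```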

In time, a more demanding game will be released. HD5870 pwnd Crysis!!!! 😉
 


The HD5870 is actually an upgrade of the 4870X2. Think about it.
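The shader math makes the point, for anyone who doesn't want to think about it: the known SP counts line up exactly.

```python
# Shader math behind "the HD5870 is an upgrade of the 4870X2":
hd4870x2_sp = 2 * 800   # two RV770 GPUs at 800 SPs each
hd5870_sp = 1 * 1600    # one Cypress GPU

print(hd4870x2_sp == hd5870_sp)  # True: same 1600 SPs total, on a single die,
                                 # without CrossFire scaling issues, plus DX11
```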
 


As we all know, a lot of people buy hardware they don't need; they just want the latest and greatest and will pay for it. That's why there will be a 5870 X2 and a GTX 380, and yes, there will be more demanding games in the future.
Then there are people who won't play at anything lower than 2560x1600.
 



You also bring up good points. The problem is not creating a fast card; the problem is creating a fast card within a certain power envelope. The points you brought up are the exact ones I was making. AMD managed to improve performance dramatically while keeping power on par with the HD4870. In fact, one could argue it is potentially even more efficient, given the 28W idle.
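A performance-per-watt sketch of that efficiency claim. Only the 28W idle figure comes from the post above; the load-power and relative-performance numbers are illustrative assumptions, not official specs:

```python
# Performance-per-watt sketch; all numbers except the 28 W idle are assumptions.
cards = {
    "HD 4870": {"relative_perf": 1.0, "load_w": 160, "idle_w": 90},   # assumed baseline
    "HD 5870": {"relative_perf": 1.9, "load_w": 170, "idle_w": 28},   # assumed: ~2x perf, similar power
}

for name, c in cards.items():
    print(f"{name}: {c['relative_perf'] / c['load_w']:.4f} perf/W, idle {c['idle_w']} W")
# Roughly double the performance in the same power envelope means roughly
# double the efficiency -- the point being argued above.
```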