GT300 wins over 5870



No it's not, it's just an upgrade from the 4870, a huge upgrade, just like the 4870 was a huge upgrade from the 3870.
 


Are you holding out for the GTX380, or does the HD5870 meet your needs?
 
Well, I'll put it this way: if NV manages to release a card for sale by the holiday season, I'll compare it to the 5000 series and buy the card that's best for me. If they don't make it in time, I'm going ATi 5870 for sure.
 


The 5870 will meet my needs (then again, I have a 1680x1050 monitor, so I can do with a bit less than cutting-edge), but there are a lot of people who will wait for the GTX380 to compare it to the 5870, and there will be plenty of people buying a 5870x2.
 
We have no idea when the G300 will be out, and may not find out this month at all, which could be a huge problem for nVidia.
OK, going by what we've seen released (of course, it's been on AMD CPUs), the sources have been pretty decent, if you're into tracking that sort of thing.
Now, as to the perf itself, I have to agree that the 5870 is a major upgrade from the 4870x2, as it doesn't suffer from dual-GPU problems/losses, the core speeds are higher, as are the memory speeds, plus a few other tweaks, and not only perf but AF as well.
If everything is what it seems to be at this point, the 5870 is slightly better than a 295, way better in some games, and in a small number of games competitive but losing by a frame or two, depending on setup.
If the G300 is ever going to get here, it'd be good for nVidia not to go around pushing PhysX and CUDA, but frame rates and resolutions, since that's still the main focus of gamers, not something else. And then promote DX11 itself, as it'll also have a few nice things in it. Having tessellation could be as nice as PhysX if done properly, and won't affect gameplay any more than PhysX will/does, but it brings the bling just like PhysX. Both are costly; the difference is, DX11 can bring more perf, while PhysX will only require more perf.
So, at the high end, yes, PhysX will work well, with diminishing returns from that point onwards, and at the mid and low end DX11 will lift perf a lot. So I don't really understand nVidia's position, other than that they possibly don't have a DX11 card ready.
 
It's good news that nVidia is not just sitting and waiting to see what ATi has to offer. Competition in this battle will be very interesting and will bring price wars in all segments, which is good for us.


Yes, nVidia are hard at work renaming their next card as we speak.
 


The renaming just isn't enough given the 5870's performance, and nVidia knows that. They're just waiting to see official 5870 performance. We also had only a little information about the GT200 series before its release.
 


He may spread them, but he doesn't create them; he just repeats what he hears, BS as that might be. To me he's like a coin flip, 50/50, with 'even a broken clock is right twice a day' reliability.

However, to think he's an nV fanboi is laughable. Dude, do you remember how amped he was that the R500/520 chip was being called FUDO? If anything, I think he would be an ATi fanboi, but more importantly, now that he props up his own site, I think he's more of a controversy / page-hits fanboi, promoting what he thinks will stir the pot and get him hits / links / diggs.
 


It's looking to be about that kind of situation, where the G300 is awesome on paper and has the potential to beat the HD5870/5870x2, but yields could force them to lower clocks (like the R600) and thus cut into a lot of that performance, while still having this big die with all this silicon, an expensive memory interface (if the 512-bit is still correct), and a ton of other 'lodestone HD2900 issues'.

The other R600-esque feature of all this is that the opposition has brought out a good working card that not only plays fantastically in old games, but they are also the ones introducing, and soon flooding the market with, DX11/ComputeShader+ hardware for people and devs to get their hands on, thus influencing the path of development far into the future. Heck, it's even worse: DX10 mid-range cards took about 9 months to hit the market, while this time it'll be maybe 2 months for mid-range and maybe 6 months for entry level, and depending on the nV delays, that may mean a top-to-bottom lineup for the competition before they have anything in market.

That's arguably worse than the HD2900 situation, but the results may not be as bad, since nV PR & marketing is still better than AMD's dysfunctional excuse for marketing.
 
Yeah, I have the same dog in this race I had before. nV said no mobile part until mid-to-late 2010 at the earliest, so ATi is the only near-term solution for me, and I'm also somewhat of an observer on this, because as long as they don't go backwards compared to HD4K, I'll likely get an HD5K laptop early next year... hopefully. *crossing fingers*
 
Off you guys' topic, sorry.
I'm curious and clueless at the same time. But I currently game with two 8800 GTXs in SLI on a monitor with a 1680x1050 native resolution. Will the new ATI 5xxx card be an upgrade on my 790i Ultra board? Would it even work? I have SLI. haha...
 

Yes, it would work, and it would be a pretty major upgrade; you just could never CrossFire them, since you have an SLI board.
 

I think the 5870 X2 TDP figure was created by him, because I haven't heard or seen such nonsense anywhere else. The BS about a GX2 and similar may well be from nVidia PR.


That was a long time ago; Charlie wasn't pissed at nVidia some time ago either 😉 And furthermore, I don't think Fuad is a fanboy; IMO he is in the program 'The Way To Be Paid' 😛
 
I really wish I had written my prediction down earlier. Due to Nvidia's rumored problems, I was going to predict that they would compete on their superior performance and CUDA abilities. Then I wake up to read this article.

http://www.tomshardware.com/news/Nvidia-GPGPU-GPU-DirectX-ATI,8687.html

Even with them playing down DX11, I have every faith the new cards will support DX11, but their midrange parts will be the GTX280/260 rebadged as "new" cards. I don't know what's going on with nvidia, but I'm starting to be very concerned.
 
As I said in another thread, the only reason I ain't getting one is that I can't afford it, due to silly notebook prices. Going to pick one up with a 3650 in it. A 4650 would have been nice and may come with the new Intel chips, but I don't need it.

Just FYI, the MSIs are pretty reasonably priced and have a good selection of mobile HD46xx and HD48xx solutions. That's what I'd get if I weren't waiting for the generation refresh.

Of course, will they bring out a mobile part with 3 DisplayPorts? :lol:

Now that would be sweet, but heck, I'd just be happy with a single DP and HDMI, and elated with 1 DP, 1 DB-15, and 1 HDMI. Anything more and I don't think my heart could handle it. 😍
 



I expect the GT300 to beat the hd5870 by quite a bit.

It will use MIMD units rather than SIMD, unlike ATI's card, potentially boosting performance QUITE a bit.

PhysX - Many people think this is FAIL, but it can REALLY make environments look a lot more realistic. This is the next big thing for games in terms of 'realism'... why WOULDN'T devs use this technology!?

Would this cGPU tech remove CPU bottlenecks by taking load off the CPU? I'm not sure how this works, but that seems like a possibility to me if the app is coded for it???

Don't forget, not only are they doubling the stream processors etc., but they're using MIMD as well... so surely BETTER than JUST double the performance??

If NV cannot produce GX2 cards for whatever reason... I believe NV would not be stupid enough to just accept being beaten... is this the reason a whole new architecture is in place? If so, I have a feeling that NV are going to have a single-GPU card to compete with the X2 versions from ATI.

 


MIMD will surely improve computation in CUDA applications over SIMD. It should not make much, if any, difference in graphics applications. I greatly question your use of the word 'quite'...
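
To put the MIMD vs. SIMD point in concrete terms, here's a purely illustrative CUDA sketch of branch divergence (my own toy example, nothing to do with any real G300 design). On SIMD/SIMT hardware, a data-dependent branch like this gets serialized within a warp; MIMD-style units could in principle run both paths at once. Shader-style graphics work rarely branches this way, which is why the gaming benefit is questionable:

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// On SIMD/SIMT hardware, threads in the same warp that take different
// branches are serialized: path A runs with half the lanes masked off,
// then path B runs with the other half. MIMD units could in principle
// execute both paths concurrently. Pixel shading rarely diverges like
// this; irregular GPGPU code does, which is where MIMD would pay off.
__global__ void divergent(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.0f)              // data-dependent branch
        out[i] = sqrtf(in[i]);     // path A
    else
        out[i] = in[i] * in[i];    // path B: runs after A on SIMD hardware
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> h(n);
    for (int i = 0; i < n; ++i)
        h[i] = (i % 2) ? 1.0f : -1.0f;  // worst case: adjacent lanes diverge

    float *din, *dout;
    cudaMalloc(&din,  n * sizeof(float));
    cudaMalloc(&dout, n * sizeof(float));
    cudaMemcpy(din, h.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    divergent<<<(n + 255) / 256, 256>>>(din, dout, n);
    cudaMemcpy(h.data(), dout, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("out[0]=%.2f out[1]=%.2f\n", h[0], h[1]);  // both 1.00
    cudaFree(din);
    cudaFree(dout);
    return 0;
}
```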

PhysX is all well and good, but it is not the GPU that makes the environments look better, it is the API... which in a perfect world would be allowed to run on any hardware you wish. I'd have to hope that OpenCL and DirectCompute will allow hardware physics to actually take off.
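
To illustrate how small and vendor-neutral that kind of physics code really is, here's a toy sketch of a GPU physics step (my own example, not anything from PhysX). It's written in CUDA, but nothing in the kernel is NVIDIA-specific; the same dozen lines could be written as an OpenCL or DirectCompute kernel and run on any vendor's DX11-class hardware:

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Toy physics step: Euler integration of particles under gravity, one
// thread per particle. Nothing here depends on the vendor; this is the
// kind of workload OpenCL / DirectCompute could run on any GPU.
struct Particle { float y, vy; };

__global__ void step(Particle* p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy += -9.81f * dt;          // apply gravity
    p[i].y  += p[i].vy * dt;         // advance position
    if (p[i].y < 0.0f) {             // bounce off the floor, losing energy
        p[i].y  = 0.0f;
        p[i].vy = -p[i].vy * 0.5f;
    }
}

int main()
{
    const int n = 4096;
    Particle init = {10.0f, 0.0f};               // drop from 10 m, at rest
    std::vector<Particle> h(n, init);

    Particle* d;
    cudaMalloc(&d, n * sizeof(Particle));
    cudaMemcpy(d, h.data(), n * sizeof(Particle), cudaMemcpyHostToDevice);

    for (int t = 0; t < 600; ++t)                // ~10 s simulated at 60 Hz
        step<<<(n + 255) / 256, 256>>>(d, n, 1.0f / 60.0f);

    cudaMemcpy(h.data(), d, n * sizeof(Particle), cudaMemcpyDeviceToHost);
    printf("particle 0: y=%.3f vy=%.3f\n", h[0].y, h[0].vy);
    cudaFree(d);
    return 0;
}
```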

You don't seem to understand how long it takes to create a GPU (or any computer part). Nvidia would not have been stupid to take second place; if it does happen, there is little they can do about it. When they designed the upcoming series, the 3000 series would have been the absolute best ATI had in the works. They design what they can, try to be the best, and if it doesn't work out, that is tough luck and the nature of the business. Even if they had known exactly what the 5000 series was going to be last year, it would have been far too late to change anything about the chip except to try to finish it quicker and bin for a faster clock rate. There is absolutely nothing they can do if ATI is able to greatly increase the performance of their cards; all they can do is increase the performance of their own as much as they feel is required and hope their R&D doesn't hand them an HD2900.

But please, continue to talk out your ass all you want.
 
MIMD is much better for GPGPU functions et al. than for plain gaming, where SIMD does fine.
I suppose in the future it'll make more of a difference, just not as much now; it's truly focused on the GPGPU scenarios.
If their GPUs don't turn out fantastic while costing a ton of die real estate for those GPGPU options, it'll spell bad news for them.
They haven't implemented a tessellation unit before, nor had the shaders completely readied, though that may not make as much difference, depending on a few things.
So just doing DX11 is going to cost them die real estate as well, besides their GPGPU functionality, all of which nVidia users will pay for; and in order to be price competitive, so will nVidia, as their margins won't be the same.
 


There is so much wrong here I don't know where to start.


Don't forget, not only are they doubling the stream processors etc., but they're using MIMD as well... so surely BETTER than JUST double the performance??

Doubling shaders, memory, and clock speeds doesn't mean you have a better card. On paper, most of the 4800 cards should have kicked the GTX200s' asses in. It is about how you use what you have and the architecture of the card. Also note that ATI developed many of the key features in DX11, and that gives them the edge in that area.

If NV cannot produce GX2 cards for whatever reason... I believe NV would not be stupid enough to just accept being beaten... is this the reason a whole new architecture is in place? If so, I have a feeling that NV are going to have a single-GPU card to compete with the X2 versions from ATI.

They might just do that, who knows really. But no SOLID information has come out about the cards, only links to random sites. However, ATI is coming out first with guns blazing, and they will have a large lead over Nvidia in sales. Many people will buy ATI cards, either themselves or in OEM computers like Dell and HP. Even if Nvidia's card is out by the end of the year, ATI will have a foothold, and that means Nvidia will have to release a card that can beat the 5870x2 right away. No waiting around to release a GTX395 like they normally would; they will have to bring it out at once.
 
Think about how long people have been talking about it. It's been months and months that people on forums, people 'in the know', etc. have been telling us that ATI were gonna be first.

It's not just first though, it's last as well. Evergreen is ATI's Alpha, and Nvidia's Omega.

It is over before any dice were thrown. Nvidia could not even make it to the table.

What you will see now is a Via/Matrox-like slide into obscurity from Nvidia. In the end, they just plain lost to a better company with smarter engineers.
 
Well, as nVidia's latest statement says, they aren't focusing their GPU development on gaming performance. So to whom are they planning to sell their GPUs, then? Gaming has been the main driving force for GPU sales and development for years, and now they plan to shift focus elsewhere with their GPGPU stuff.
 
The problem with PhysX is that, just like any new technology, it takes effort to implement. And now, with NVidia potentially behind for a long period of time, they have even less leverage. Even back when they were dominating ATI, they could barely get PhysX going. Unfortunately, especially with the economy the way it is, most game makers will be focused on saving money rather than utmost realism, so until there is one clear choice for physics simulation, I doubt we will really see its implementation take off.
 
I wonder what these guys are thinking of? First, they were ditched by Intel on chipset production for the i7. Now they plan to walk a different path from gaming-performance development. They're taking some risks, hoping that gamers will suddenly shift their paradigm from gaming performance to PhysX, CUDA, etc. Well, good luck on their endeavors. If the DX11 cards from ATI turn out to be kick-ass solutions, then I don't know where they would put their asses in the market.