Nvidia vs ATI 2010



Games like this have already been attempted, and eventually this is the sort of thing that will be the norm.

But as I said before, it WON'T be under PhysX; it's much more effective to do it on the CPU. Then everyone can use it, and you aren't limited to only the tools Nvidia provides.
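
Something like this is all a basic CPU physics tick takes. A rough sketch in Python/NumPy, purely illustrative (real engines like Havok do it in optimized, threaded native code), but the point is it needs no vendor-specific GPU API:

```python
# Rough sketch of a CPU-side physics tick -- illustrative only.
import numpy as np

def step(pos, vel, dt):
    """Advance a particle system one frame with semi-implicit Euler."""
    gravity = np.array([0.0, -9.81, 0.0])
    vel = vel + gravity * dt                 # integrate acceleration
    pos = pos + vel * dt                     # integrate velocity
    pos[:, 1] = np.maximum(pos[:, 1], 0.0)   # crude ground-plane clamp
    return pos, vel

pos = np.random.rand(100_000, 3) * 50.0      # 100k debris particles
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, 1.0 / 60.0)        # one 60 Hz frame
```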

Red Faction: Guerrilla is a perfect example of that: screw looking for a key, just sledgehammer the door (or the surrounding walls).
Deus Ex: Invisible War was the first game that let us pick up anything we could lift and throw it at our enemies, and that's an old game now.

Eventually, destructible environments and full real-world physics will be standard in almost every game. And it will be Havok making money from that, not Nvidia.
 


Cypress is almost 27% larger than RV770.

334 mm² against 263 mm². It is a big chip by ATI standards, but still manageable, and it scales down to Juniper by simply cutting it in half: a 5770 is half a 5870.
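
The maths is easy to check:

```python
# Quick check of the die-size claim (areas in mm^2).
cypress, rv770 = 334.0, 263.0
print(f"{cypress / rv770 - 1:.0%} larger than RV770")   # 27% larger
print(f"half of Cypress = {cypress / 2:.0f} mm^2")      # 167, about Juniper's size
```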

ATI has a huge lead in production capability, and Nvidia will never close the gap with their current business model.
 
ATI has a huge lead in production capability, and Nvidia will never close the gap with their current business model.

I don't know about never. We don't really know a lot about Fermi's performance. It's quite possible that its design will revolutionize GPU computing as they are hoping. While it might have hurt them some in the short term, it's possible it'll put them way ahead in the future.

All we can do is wait and see. They are taking a large risk, but sometimes risks come with big rewards.
 
I see Fermi needing to be at least 20% faster than the 5870, and no more than $450 US.
My reasons: add 5% performance from driver improvements by the time it's here, overall, as certain games will get decent bumps while others get small ones.
Then the 5890, which should be a 10% faster card, plus the driver improvements, comes out roughly 15% faster overall than where we're at now with the 5870.
While Fermi too will see nice driver improvements, people don't usually buy cards counting on those, but on the performance as it is now.
If the 5890 comes in at $400, then Fermi can't go higher than $450, IMHO.
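
To make the arithmetic explicit (these are just the assumed percentages from above, not measured numbers), indexing today's 5870 at 100:

```python
# Performance index, with the 5870 as it stands today = 100.
hd5870_now = 100.0
hd5870_later = hd5870_now * 1.05          # +5% from maturing drivers
hd5890_later = hd5870_now * 1.10 * 1.05   # +10% card, +5% drivers
fermi_floor  = hd5870_now * 1.20          # the 20% bar argued for above

# ~105, ~115.5 (the "15% faster" above), and 120
print(round(hd5870_later, 1), round(hd5890_later, 1), round(fermi_floor, 1))
```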
 
I can see Fermi not being a cost-effective card for gaming when it's first released, yet still being a success. It may suck for gaming at first and be a great success for workstations, while paving the way to more powerful gaming cards.

That would still be successful, just not in the short term.

It's also possible the whole concept will go up in flames and not be worth much at all. Time will tell.
 
Fermi will need to be faster than the 5870 by default; Nvidia already has real performance numbers for their competitor's product, and Fermi will be more expensive on account of (1) it being a massive chip and (2) low initial yields. If they release a product that's more expensive than what ATI offers and only competes equally in performance, nobody but fanboys (and perhaps some noobs) will buy it. ATI will, as usual, lower prices to compete. So, as long as the top single card (and subsequent cards) are faster than their ATI counterparts, even if they're more expensive (to an extent), Nvidia will do about as well as they did with the 200 series.

If Fermi competes equally with ATI's 5k series, I'll be incredibly surprised.

Any advantage ATI gains from having DX11 hardware out first, as developers optimize their programs for it, will be very interesting to watch.
 
All I care about is that if a flagship card comes out later than another card, it should be better, so it pushes down prices! So go Nvidia! Then go ATI, then go Nvidia, then go ATI, etc.
 
Let's hope Nvidia loses ground until the market share for discrete GPUs is about 50-50, then makes a comeback with another 8800, then ATI does the whole 3800/4800 cycle again.
If one of the graphics makers dies, it'd be a monopoly.
 


lol.

Exactly.
 


Here are the latest numbers from TechReport/B3D:

http://techreport.com/articles.x/17747/2

Based on the previous numbers, Cypress would be roughly the same size as the G92, the 4770 is about half the size of the G92b, and the HD5770/Juniper is just a bit bigger than the 4770 and much smaller than the G92b, so a die shrink isn't going to be enough.

But remember, supposedly Fermi is very scalable, so it shouldn't be any more of a problem than getting Fermi to market by November... 😉
 


If there had to be a monopoly, heaven forbid, I'd hope it would be ATI left standing. Nvidia has demonstrated the greedier business model. They've charged the most money for a single-GPU solution, and it's been them this entire time pushing reasons why consumers need more than one card. Think about it: they came up with SLI so people who have to have the best will undoubtedly purchase not one but two or three $500 cards. They make a killing off of it, and it was never an essential technology. ATI matched it with CrossFire just to stay competitive.

Now they want you to buy multiple cards for hardware-accelerated physics. Absolutely unacceptable. The day consumers must spend another $300 on a separate piece of hardware just to accelerate physics will be a dark one indeed, especially when the CPU is more than capable of taking on the task. Between PhysX, all this hype of Fermi taking over the CPU, constant bashing of Intel (deserved or not), deceiving consumers, cheating in synthetic benchmarks, and paying video game developers to make a lesser experience for ATI cards, Nvidia having too large a share of the market is a terrifying prospect.
 
The day consumers must spend another $300 on a separate piece of hardware just to accelerate physics will be a dark one indeed, especially when the CPU is more than capable of taking on the task.

Exactly what I was saying earlier. This is one of the main things I dislike about Nvidia: they create gimmicks as a way of getting money out of people. They obviously know that physics can be calculated more cheaply and easily on a CPU, yet they still try to get people buying a second card for dedicated PhysX acceleration.
 


No, you can still add reasons that breaking down every door is counter-productive:

- Limited ammo capable of breaking the door (grenades, high-calibre rounds)
- Time considerations (20 blows of an axe, 50 kicks of the foot, etc.)
- Stealth/noise of using or not using a key

Oblivion has keys and lockpicks and spells; it doesn't have to be a simple cut-and-dried situation, it can easily have more complex considerations.
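
A toy sketch of weighing those tradeoffs in game logic; every number here is made up, it just shows the choice doesn't have to be cut-and-dried:

```python
# Toy example: pick how to get through a door given the tradeoffs above.
def best_entry(has_key, grenades, door_hp, axe_damage, sneaking):
    options = []                            # (method, seconds, noise)
    if has_key:
        options.append(("key", 2, 0))       # slow-ish to use, silent
    if grenades > 0:
        options.append(("grenade", 3, 10))  # fast but loud, costs ammo
    options.append(("axe", max(1, door_hp // axe_damage), 5))
    pick = (lambda o: o[2]) if sneaking else (lambda o: o[1])
    return min(options, key=pick)           # quietest vs. fastest

print(best_entry(True, 1, 100, 5, sneaking=True))    # ('key', 2, 0)
print(best_entry(False, 1, 100, 5, sneaking=False))  # ('grenade', 3, 10)
```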
 


I think in many ways you're really not open-minded, but I do have a couple of grievances with Nvidia as well, though not for coming up with new technology.

SLI does not force you to buy two expensive cards; only those with a ton of money will do that, and Nvidia gives them something to spend it on. Meanwhile, SLI does save money on upgrades, and in many cases two cheap cards in SLI outperform a single high-end card that costs the same. These are good things, and you don't have to buy into it if you don't want to.

PhysX is another good thing, and a bad thing in the way they handle it. PhysX is a great way to utilize the power of GPUs, which are progressing faster than CPUs; they are much better number crunchers, and it's a great idea to make use of that. What I don't like is that they disable PhysX support if you put a competitor's GPU alongside theirs. There is no reason to do that other than to punish people who don't buy their product (good thing there are workarounds). I also believe they have prevented ATI from writing drivers to do the same, as there were some articles a while back about someone who got PhysX to work on ATI cards, only to be hired by Nvidia later, and we never heard of it again.

We all cry about how they overcharge, but do we really know they're taking advantage of us? They may be barely surviving; are they required to lose money to match a competitor's price? They have business expenses to think about. If ATI can produce similar-performing chips at a lower cost, that doesn't mean Nvidia should have to lose money trying to match them.
 
2010 may well be ATI's year. Fermi is something like the Loch Ness monster! We don't know when it will be out. What will the specifications be? What will the prices be? We can definitely give the first two quarters to ATI, because I don't think Fermi will be out by then.

If prices can be reduced to optimize profit, then I think Nvidia should do it, like ATI does. Most of their cards are a bit overpriced in every segment. People can accept one or maybe two segments being highly priced, but not all of them. Nvidia should also look at the mid-range segment rather than making giant monolithic cards!
Keep your fingers crossed till Fermi comes out.
 


I think the 5870 has already proved to be this generation's 8800. Its performance leap is absolutely huge compared to last gen; a single-GPU solution outperforming the high-end dual-GPU solution from last gen is pretty phenomenal.

This is why I don't think Nvidia will strike back this gen; they will most likely be catching up to ATI all year. But I imagine they will be first out with the next-gen cards, and will probably take the lead again.

If they do manage to come up with something that outperforms the 5870, I will be very impressed, as it would mean an even bigger performance jump from the last gen, and would surely beat the legendary 8800 performance leap.

I'd like to see it, but I don't expect it.
 


Actually, the 5800 was pretty sad; they missed the mm² die size by a lot. The 5850/5870 was supposed to be $199/$299, just like the 4850/4870.
The 4800 had 800 shaders vs 320 in the 3800, and they didn't even double the die size.
3800 -> 4800 = 55nm -> 55nm
4800 -> 5800 = 55nm -> 40nm
They put 1600 SPs in there, but the transistor count more than doubled! I'll give them 30% of the die size for the tessellator/DX11 hardware (the 4800/3800 had a tessellator too), but that's still on top of the ~89% density they gained from moving fab process.
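
The ~89% figure is just ideal area scaling between the two nodes:

```python
# Ideal transistor density gain going from 55nm to 40nm.
old_nm, new_nm = 55.0, 40.0
gain = (old_nm / new_nm) ** 2 - 1
print(f"ideal density gain: {gain:.0%}")   # 89%
```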

The 4870 did more damage to the 3870X2 than the 5870 does to the 4870X2.
 
Granted, the 38 series came during my 'downtime' from PC hardware, but it seems hard to imagine such a performance leap.

In times past, you would expect a new GPU generation to offer a considerable improvement over the gen preceding it, but certainly not a doubling of high-end single-GPU performance.

Have I missed so much in the last few years that the 5870 is just an expected performance leap?

Everyone I have talked to about it seems just as blown away by its performance.

 

Where'd you get that the 58 series was supposed to be the same price as the 48 series? It sure wasn't from ATI:
[attached image]
 
Nvidia is trying hard to kill PhysX themselves. If they didn't force people to circumvent the code that prevents ATI cards and PhysX from working together, a lot more people might adopt it. When a programmer made an ATI card do PhysX, Nvidia hired him so it didn't get out.

If Nvidia wants PhysX to become mainstream, they have to allow everyone to use it.

Anyways, I think you guys misunderstand what PhysX is. It's not code to make explosions bigger; it's code that does the physics calculations on the graphics card rather than on the CPU, which is generally slower at that kind of work than a graphics card.

Hopefully DX11 will get a unified version of code that can do similar things to PhysX, so devs will write the same code for both card companies.
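
Just to illustrate the offload idea, here's a sketch using PyCUDA's array type as a stand-in (it needs an Nvidia GPU, and this is not how PhysX itself is implemented, though GPU PhysX does run on CUDA underneath):

```python
# Illustrative only: the same kind of integration step, but the arrays
# live in GPU memory, so the element-wise math runs on the graphics card.
import numpy as np
import pycuda.autoinit                 # sets up a CUDA context
import pycuda.gpuarray as gpuarray

n = 1_000_000                          # a million debris particles
dt, g = np.float32(1.0 / 60.0), np.float32(-9.81)

height = gpuarray.to_gpu(np.random.rand(n).astype(np.float32) * 100.0)
vel = gpuarray.zeros(n, dtype=np.float32)

vel += g * dt                          # each line launches a GPU kernel
height += vel * dt
print(height.get()[:5])                # copy back to the CPU to inspect
```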
 