Nvidia vs ATI 2010

Status
Not open for further replies.
Yeah, I should never have bought my first case without measuring first; I just gauged it by eye and it ended up costing me a second case, lol. Wouldn't take the first one back. I hope someone learns from my mistake: measure first, lol.
 


I'd call a 40% increase in performance rather large myself. Of course, it's hard to find something to push either of them unless you're using a 30" monitor or Eyefinity.
 


I didn't watch the video, so I'm not sure what you're on about, or what I was supposedly 'duped' into thinking. I never commented on the comparison to the 5870 or 5970, just that random crap on YouTube doesn't usually mean a thing. Obviously the 5870 will be slower than the new Fermi-based cards, or else something went VERY wrong.

MY issue is with the trolling and rumour-mongering, not with the cards in question. Besides that, we can't be sure about Fermi's performance until we have some reviews, though I'm sure "a bit faster than the 5870" is a given. A rumour is just a rumour without the specs of the test setup.
 
ATI did not own 2009... they owned the last quarter of it. nVidia dominated them for the first three quarters. And because of the TSMC debacle, the lead ATI could have gained from the 5800-series cards is not nearly as huge as everyone thinks it is. Add in the fact that there have been no groundbreaking OpenCL or DX11 apps, and ATI's win in that last quarter was actually quite dulled.

If Fermi flops, which I doubt it will because of nVidia's fanbase, but if it does, THEN ATI will take a commanding lead in 2010 and nVidia will have to play a bit of catch-up. The thing is, nVidia has been making money hand over fist for a while now, while AMD has only been losing money, especially with the purchase of ATI weighing on their bankroll, and nVidia is over 5 times AMD's size in asset/monetary terms. So even after the huge blow they may take at the beginning of this year, I still don't think it will matter, as they can just pour money into the problem to solve it if they have to...

Also, to the guy saying that nVidia tried to kill Lucid's Hydra: nVidia and ATI have both already said they welcome the idea. Hydra got delayed because of Lucid's own technical problems; accomplishing what they are trying to accomplish with that chip is bloody hard.

Blocking PhysX when ATI cards are installed was a pretty douche move though; that kinda pissed off a lot of people and might also contribute to a loss in nVidia's market share this year.
 


nVidia
http://www.nasdaq.com/asp/quotes_reports.asp?symbol=NVDA&symbol=AMD&selected=NVDA

AMD
http://www.nasdaq.com/asp/quotes_reports.asp?symbol=NVDA&symbol=AMD&selected=AMD

nVidia has a market cap of $10 billion; AMD, $6 billion. I'm not sure how to measure asset/monetary value, but the stock market values AMD at 60% of the value of nVidia.

nVidia has reported an earnings-per-share loss for each of the last 4 quarters, though their revenues are growing nicely. AMD is losing more money, but with the $1.25 billion from Intel, they'll report a huge 4th-quarter profit.

nVidia has much less debt than AMD.
 
I wouldn't say either company owned 2009, though. ATi competed very well on price/performance and their performance was pretty darn good all around, while nVidia had the edge on raw performance, and probably features too.

With the HD 5800s, ATi takes performance, price/performance, and features (DX11 & Eyefinity). Coming into 2010, ATi has a nice lead. Who "wins" 2010 depends entirely on how and when Fermi comes out, and on whether ATi still has a refresh/new generation this year.

Oh, and here's hoping that Fermi doesn't flop.
 
As much as I prefer ATI, I very sincerely hope that Fermi doesn't flop. Quality Nvidia products push the ATI pricing strategies to a very consumer-friendly level. When both companies are on their game, we get to game cheaper. Works for me.
 
I would go ATI 100% as of late, and it's funny to say that, because three years ago when I was building my comp in the 8800 GTX days it was an EAAAASY pick to go Nvidia.
 
ATI's market share has increased a lot, and will continue to, as they've already sold 2 million 5xxx-series chips, with more moving each day; regardless of performance, that's what's important to ATI.
For a short period early in the year, and again from late Sep/early Oct, they've held the performance lead, and will continue to till March, possibly beyond. From August '08 till January '09, and from September till March now, it's been a trade-off lately, and that's brought lower pricing, a win-win for us and ATI.

Rumours of Fermi using a lot of power and not taking the crown can be found here:
http://translate.google.be/translate?hl=fr&sl=fr&tl=en&u=http%3A%2F%2Fwww.pcworld.fr%2F2010%2F01%2F08%2Fmateriel%2Fcarte-graphique%2Fces-2010-nvidia-fermi-dissipation-thermique-tres-importante%2F468101%2F
 
Interesting doomsday article.... the Fermi's power consumption is 225 watts as nVidia has stated. It has the capacity of up to 300watts with the 6+8pin configuration so that it has room for overclocking......

It is less than the power consumption of a 5970 which has a regular 294 watt max, so why wasn't everyone running for the hills when we heard the 5970 could max out at 400watts with an 8+8pin connection for overclocking?? The 5970 has been said to be great with heat dissipation, so why would the Fermi single-GPU cards be any worse?

Heat dissipation is directly proportional to the amount of power used, and considering the GF100 is going to use 70-100watts less than the 5970, I really don't see a heat problem. Now make a dual-GPU GF100 card... and you might have an interesting heat situation 😛
 


In the article they said Fermi used 300W with 2 x 8-pins. The 5970 is a dual-GPU card, so that's like comparing the 5870 to a GTX 295.
You wonder why the 5870 uses less power? Because one Fermi uses as much as two Cypress.
How long will it take to scale Fermi down to replace the GTS 250, 9800 GT, or GT 240?
nVidia will take significantly longer, perhaps a year or more, to touch the budget segment.
 
"It is less than the power consumption of a 5970 which has a regular 294 watt max, so why wasn't everyone running for the hills when we heard the 5970 could max out at 400watts with an 8+8pin connection for overclocking??"
First of all, this is rumoured, as I said. Secondly, it's hinting at needing the kind of special cases and cooling that dual cards like the 5970 need, but without the 5970's special cooler, though the cases do require certain dimensions.
I think the main point here is: it's not any more power-efficient than ATI's solution, and possibly less so, once you factor in the dual-GPU scaling loss versus a single chip.
 
Seems like all the rumours (apart from those obviously fake graphs) are coming true:
expensive, hot, noisy, power-hungry, and not really much faster.
 


Fermi uses 225W (8+6-pin); it CAN go up to 300W if you wish to overvolt the card and OC it hardcore. A 5970 uses 294W max (8+8-pin) without overclocking, and 400W max if you overvolt it to OC. So no, one Fermi doesn't use anywhere near two Cypress chips' worth of power. It's definitely more than one Cypress, but considering its size and the performance some are estimating, it's likely just as efficient as Cypress.



Ahh, fair enough. Thought you were pulling another "the sky is falling" trip like some people on here are.

As for power efficiency, we don't know if it is better or worse than ATI's solutions; it could be better, could be worse. Saying it uses more power so it must be worse is a horrible inference; the chip is bigger, so it obviously uses more power to give more performance. If nVidia makes the chip very power efficient, it could give off practically no heat at all (obviously it won't be that efficient, but theoretically it could be; the cards are bound to have plenty of bloody real estate to make it so 😛).

I'm guessing it will be on par with the efficiency of the 5970 though, which would mean it's unlikely to require special cooling for tri-SLI... quad-SLI probably would need some though. If Fermi is far more efficient, which given nVidia's track record I doubt, then you might be able to pull off quad-SLI with no special cooling, but again I doubt that seriously.



Except we don't know if any of them are true... any info on that right now is just as fake as those graphs, aside from the power usage. But we already knew it would use a lot of power for such a huge chip; how EFFICIENT it is with that power is what matters, not how much it uses. Expensive? Probably; you pay a premium for performance, and it probably WILL perform, because that is how nVidia builds cards, and if it doesn't, then nVidia would be stupid to release anything at all. Hot and noisy? You have no idea and neither do I, nor does anyone else on these forums or the internet, so stop spreading rumours >.<
 
Interesting doomsday article.... the Fermi's power consumption is 225 watts as nVidia has stated. It has the capacity of up to 300watts with the 6+8pin configuration so that it has room for overclocking......

225 as nVidia has stated? Where? By my math it could max out at 300W, as the article claimed. Your 225W figure is off. (I know where it's off if you need the help.)

It is less than the power consumption of a 5970 which has a regular 294 watt max, so why wasn't everyone running for the hills when we heard the 5970 could max out at 400watts with an 8+8pin connection for overclocking?? The 5970 has been said to be great with heat dissipation, so why would the Fermi single-GPU cards be any worse?

Hmm, let's see. First, the 5970 is a dual-GPU card while the G300/GF100/Fermi (the damn thing isn't out yet and it's already been renamed???) is a single-GPU card. If the speculation article is right that it can beat the 5870 but not the 5970, they won't be able to shrink it down. Why? Because it uses more than 225W, for starters. Second, we aren't running for the hills because the 5970 is as high as this generation goes; why worry about it when AMD isn't going to release anything faster at this time? nVidia at this point once again seems unable to do an "x2" card. Many said the same thing about the GTX 2xx series, so this might not be true; like AMD last gen, they could use lower-end chips.

Heat dissipation is directly proportional to the amount of power used, and considering the GF100 is going to use 70-100watts less than the 5970, I really don't see a heat problem...If nVidia makes the chip very power efficient, it could give off practically no heat at all

Wait, which is it? Heat is based on the technology of the chip, among other things, and the efficiency of the design also plays a part. As I've already stated, max power consumption is more than 225W; your math was off. A single high-end card will be 300W, or close to it, like the 5970. The big question now is: will the GTX 380 (assuming that name is still in play) be faster than the 5970?
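For what it's worth, the 300W ceiling both posters keep circling falls straight out of the PCI Express power-delivery budgets. Here is a minimal sketch of that arithmetic, assuming the spec's usual figures (75W from the x16 slot, 75W per 6-pin plug, 150W per 8-pin plug); the function name is just for illustration:

```python
# PCI Express power-delivery budgets, in watts (per the PCIe spec):
# the x16 slot itself supplies up to 75 W, a 6-pin aux plug 75 W,
# and an 8-pin aux plug 150 W.
BUDGETS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def max_board_power(*aux_connectors: str) -> int:
    """In-spec power ceiling for a card: slot budget plus each aux connector's."""
    return BUDGETS["slot"] + sum(BUDGETS[c] for c in aux_connectors)

print(max_board_power("6-pin", "6-pin"))  # 225 W ceiling
print(max_board_power("6-pin", "8-pin"))  # 300 W ceiling (the disputed figure)
print(max_board_power("8-pin", "8-pin"))  # 375 W ceiling
```

Under those budgets a 6+8-pin card tops out at 300W in spec, which is where the article's number comes from; 225W happens to be the ceiling of a 6+6-pin layout, or could simply be a TDP sitting below the 6+8-pin ceiling.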
 
One caveat though: the Fermi as listed has 448 shaders and lower clocks; by what margins on the clocks, we can only guess. So the 225-watt figure as listed isn't enough info at this time, and I can't remember now whether that's power usage or TDP. If it's TDP only, then there's wiggle room for a full GF300 with 512 shaders; if not, it'll be much like the other rumours of it being like the 280s, which, as I said a while ago, means hot with poor OCing.
This is all speculation, so I'm not claiming certainty here, but several things point towards these problems being a reality. nVidia's repeated claims of "coming soon" could just be to keep themselves in the game and prevent other purchases, which one way or another works for them; or, given the slow development and release, there are problems, be it heat, clocks, bugs, or all of the above. It doesn't bode well.
 
My figures are not off; google it for yourself, it's 225 watts, and they barely kept it under that... barely. They've already stated they only added the extra connection, allowing up to 300W, so people had room to play with for OCing.

It runs at a max of 225W NORMALLY, so if you didn't have the extra power connection for up to 300W, you wouldn't be able to overvolt/OC worth a crap.

I agree with you though, jaydee; like I said, I figure it won't be that efficient, although I'm hoping it will at least match the efficiency of the 5970 or get close to it, otherwise they might not sell, ATI won't drop their prices, and that's bad for everyone. That, and if they do end up offering more performance than the 5800s but I can't tri-SLI them because of heat, I'm going to be annoyed... and ATI will probably get my money.
 
But again, if it's lower in power and performance than the 5970, even if it comes close, once you include the multi-GPU scaling losses in the overall performance it's not good, and it will get labelled as such: a hot and power-hungry card.
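That scaling-loss point can be made concrete with a toy perf-per-watt comparison. Only the TDPs quoted in this thread (294W for the 5970, the rumoured 225W for Fermi) come from the discussion; the 80% dual-GPU scaling factor and the performance units are illustrative assumptions, not measurements:

```python
def effective_perf(single_gpu_perf: float, gpus: int, scaling: float) -> float:
    """Total throughput when each extra GPU adds only `scaling` of a full GPU."""
    return single_gpu_perf * (1 + scaling * (gpus - 1))

PERF_CYPRESS = 100.0                               # baseline: one 5870 = 100 units
perf_5970 = effective_perf(PERF_CYPRESS, 2, 0.80)  # dual GPU at an assumed 80% scaling
perf_fermi = 130.0                                 # hypothetical single-GPU Fermi

print(perf_5970, perf_5970 / 294)    # 180.0 units, ~0.61 units/W
print(perf_fermi, perf_fermi / 225)  # 130.0 units, ~0.58 units/W
```

With those assumed numbers the dual-GPU 5970 still edges out the hypothetical Fermi on performance per watt, which is exactly the "not any more power efficient" worry; swap in different scaling or performance figures and the comparison could easily flip.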
 
nVidia lowered the shader count, remember, maybe just because it was power-hungry; so maybe it is more efficient and less hot now and will still perform well. Who knows; speculating just seems trivial at this point.

Although I guess everyone on here is doing just what nVidia wants... discussing their card, lol, for good or bad.
 
My figures are not off, google it for yourself, it's 225 Watts

If it's so easy to find, can't you just link it? Remember that you claimed nVidia said this was the case, so a link from nvidia.com would be awesome. I'm not trying to be mean/rude, but I think we both know that either the power figures aren't known, or they're higher than 225W.
 


They had to downclock considerably because of heat issues; the card would throttle itself down (PowerPlay) when it reached 100°C. So you're turning a blind eye and keep pointing at it as an ATX issue... fan BOY.

 
I don't see how I've made a fanboy statement.

It's a fact that any card drawing more than 300W is considered non-standard ATX and must be branded accordingly. They would lose sales if that were the case, hence the downclocking.

The fact is, every single (professional) review of the 5970 has overvolted it using the supplied ATI volt tool and reached 5870 speeds (850/1200) without heat causing throttling.

If you're trying to somehow imply that the 5970 does not hugely outperform the 295, then it is surely not I being the fanboy.
 
There is a heat issue with the 5970; I think I was reading about it over at [H]. Both GPUs were OK, but one of the VRMs would hit 100°C and you'd have issues. They couldn't get it to happen in games, but it would happen when they ran a program whose name escapes me.

AFAIK, the reason the clocks are lower is to make sure it stays under 300W at stock. The 5970 is a 294W card, so if the clocks were higher it would be above 300W. By lowering the clocks, they stay within ATX specs.
 