GTX480 / GTX470 Reviews and Discussion

Alright, I'm just going to say it: this thread is insane. All the bias and misinformation is ridiculous.

I find this hilarious though:
Thats a very low OC btw 140Mhz OC on a 470? A 5850 does 300Mhz OC.

He forgets to mention that almost every 5850, reference or not, tops out at about 850 MHz on the core at stock voltage, and the only way to get any higher is with a voltage increase. So comparing stock voltage vs stock voltage, we are looking at a 140 MHz core overclock on the 470 vs a 125-150 MHz core overclock on the 5850 (875 MHz is pushing it at stock; the highest I've seen is 910 MHz, on a non-reference card that couldn't change volts). But this is a LOW OVERCLOCK, I guess...

Then he compares the fabled Gigabyte over-volted card with a 1 GHz core, but forgets to mention that it will likely cost more than the GTX 470, and likely more than a reference 5870, because of its performance. That performance should also be about equal to a GTX 470 overclocked on stock volts, because a GTX 470 is ~10% faster than a 5850 and the extra overclock on the Gigabyte card should give a ~10% performance boost at most.
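To put rough numbers on the comparison, here's a quick sketch; the stock core clocks are assumed from the reference specs (GTX 470 at 607 MHz, HD 5850 at 725 MHz), so treat the percentages as ballpark figures:

```python
# Rough percentage comparison of the overclocks discussed above.
# Assumed stock core clocks: GTX 470 = 607 MHz, HD 5850 = 725 MHz.

def oc_percent(stock_mhz, oc_delta_mhz):
    """Return the overclock as a percentage of the stock core clock."""
    return 100.0 * oc_delta_mhz / stock_mhz

gtx470 = oc_percent(607, 140)   # the "low" 140 MHz overclock
hd5850 = oc_percent(725, 150)   # 725 + 150 = 875 MHz, already pushing it at stock volts

print(f"GTX 470 +140 MHz: {gtx470:.1f}%")   # ~23.1%
print(f"HD 5850 +150 MHz: {hd5850:.1f}%")   # ~20.7%
```

In relative terms the "low" 140 MHz bump on the 470 is actually the bigger overclock of the two.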

This thread is full of stupid comments like these from people who don't take the information to its logical conclusions. All this because they don't "like" a product. But how can they know whether they like or dislike it if they won't gather the facts and draw logical conclusions from them?
 


You need to look up the prices of these cars. 😱

The most expensive Lamborghini is about $1.5 million, but it's a prototype and isn't meant to be sold.

Even the ridiculous Bugatti Veyron 16.4 is a sub-$2 million car.

The most expensive car ever sold was a Ferrari 250 Testa Rossa, for $12 million.

I know there is no good reason for you to know that, I just thought that I would share some useless knowledge. 😀
 
Hey, I tried to see if anyone had anything about Nvidia's 3-monitor solution, cuz I haven't seen anything past the announcement and a little demo of it in a long time.

Also, I wonder what people's thoughts are on Adobe CS5, since a large part of its features are Nvidia-only acceleration, probably through CUDA or whatever. I mean, that's a selling point for the 470 and 480.
 


Isn't the horse-sized eWang when you get the latest stuff for free before anyone else?

Oh... BTW, I've got a giant mobile e-Wang ! :hello:

 


Yeah sadly enough, it makes me wanna delete the last page and a half.

Might just end up restarting the thread, but I'd hate suffering through the first two days of n00bz again; why do you think I never join these things on the first day? :pfff:

 


Supposedly a driver is coming out in April to coincide with the release of the cards that will have the 3D Vision drivers.
 
The thread has gone to hell all over the place. There are some very vocal people who know nothing of what they are talking about. Why is it that those types of people are always the most vocal?

It makes me oh so tired.
 


I can't speak to the multi-screen support because I only need two screens and couldn't care less, but Adobe CS5's Mercury Playback Engine was the selling point for me grabbing a GTX 480. To me, April 12th is a big day, not because of Fermi availability or whatnot, but because it's the paper launch of Adobe CS5 and I get a ton of new material to drool over. These are exciting times in computing, because all of a sudden the professional setups that were reserved for industry pros are affordable to ordinary folks like me, whose creative budget is the portion of my disposable income that isn't blown on beer, and whose director of finances is the wife...

Unfortunately, my copy of CS5 will only ship by May 22nd, so there won't be any hands-on testimony from me. Though if you wade through the vast volume of creative communities out there, it's impossible not to see the excitement.
 


And I'll bet you read all 17 pages to make sure that it was pure fanboyism and no discussion, right? You bash those of us that have participated in this thread, but what does your post contribute? Pissing and moaning puts you a lot higher than a company's fanboys, doesn't it?
 


But he has liquid cooling, so his E-Penis is larger than ours.

So we can't argue with him; he's automatically right.
 


Wait, huh? What makes you say that?
Have you seen what CUDA-enabled Adobe Premiere or After Effects can do on relatively cheap mainstream workstations?
This is by far the greatest leap in performance for the masses I've seen in my lifetime, and no consumer CPU or RAM setup on a standard 6-slot configuration in the current generation could hope to come close at that price, or even at double it.
 


Man, you have no clue how big my E-peeper is if it's based on water cooling.
I now have four radiators, four pumps and four water blocks, each with their own loop...
on a 4-way SLI board in a Mountain Mods case.

And no, I did not read all 17 pages, only 5.
I did like the charts.

Sorry for bashing everyone; it was only meant for most of them.
And I did add some good input about the new card:
that the card is hot, but Nvidia has stated that the card is built to run hot, and that this must be true b/c the 90°C card
is the fastest card out.
But I took it out because my GF came home with Quiznos torpedo subs and I didn't want to look up the quote with a link.
But I can do that b/c I have a big e-penis.
 


Your E-Penis is large enough to pleasure an E-Horse.

The only person I know with a larger E-Penis would be the guy on HardForum who has his i7-920 at 5.3 GHz on LN2.
 


Two errors in your thinking:

A) the obvious, that it's not the fastest card out.

B) If it was built to run hot, why didn't it reach its target clocks or its target SPUs?

That it's hot (or power hungry) is a relative issue: a problem for some, and simply a manageable factor to be dealt with for others. But there's no point in pretending it's 'designed that way' when it didn't reach its intended design. :pfff:
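Point B is easy to quantify. GF100 was laid out with 512 shader processors, but the shipping GTX 480 enables 480 of them and the GTX 470 enables 448; a quick sketch of the shortfall:

```python
# How far the shipping Fermi cards fell short of the full GF100 layout.
DESIGNED_SPS = 512
shipped = {"GTX 480": 480, "GTX 470": 448}

for card, sps in shipped.items():
    disabled_pct = 100.0 * (DESIGNED_SPS - sps) / DESIGNED_SPS
    print(f"{card}: {sps}/{DESIGNED_SPS} SPs enabled ({disabled_pct:.2f}% disabled)")
# GTX 480: 6.25% of the chip disabled
# GTX 470: 12.50% of the chip disabled
```

Even the flagship ships with part of the chip fused off, which is the crux of the "it didn't reach its intended design" argument.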
 


I think there are way too many people misquoting that poor PR monkey's wording trainwreck...
The chip wasn't "designed to run hot", as in engineered to purposely produce more heat; that statement would make no sense, since no engineer would want a consumer-grade computer chip to run hotter by design.
I'm pretty sure what he meant was that the chip was designed to "be able to take the heat", as in the heat will not have an impact on the longevity or reliability of the chip.
Nobody's contesting that the chip hasn't quite reached the marks it was aiming for, but it's a new architecture; who has the spotless track record to blame them?
 
It's not made to run hot, lol; that again would be stupid.

It was made with the heat in consideration, and with all things ******* up for them, they concluded that the heat would not affect the performance or longevity of the product any more than their other products.

Still, hot is hot, although all I care about is: is it quiet? And pretty much, eh.
 
I think you guys are confusing things that are really properties of the process with things that were designed for. What in nV's design would "be done with a consideration for heat" that wasn't done on the HD5870?

90°C isn't particularly that hot, with the X800 and GF6800 both cresting that number under load, and the GF7900 doing that without a problem until the voltage regulators failed (not the GPU). I just don't see anything specific to the design that shows it was made to handle heat any better or worse.

They definitely seem able to handle that temperature right now, and no one knows the long-term effect, which brings us to the next idea...

As for longevity and reliability, that's why clocks are picked: at a point where the heat produced and the heat dissipated by the HSF balance out against performance, to provide a card that you can make, sell and profit from.
They are chosen so that the distribution of failure rates makes the warranty a profitable venture, or pushes the failures so far out that they don't affect the majority of sales/revenues.

The reason the clocks and SPUs are an issue is that whatever the original design was (and design is usually done outside of immediate process characteristics and final temperature/heat/power considerations, unless a metal spin is done), that design was not achievable with the chips that actually resulted, and at least the clocks would be related to heat/power.

No one has a spotless track record, which is why I remind people about the very warm R600 on the problematic 80nm HS process, but we don't just ignore such things. At the time everyone knew the only hope of fixing that was a process change; ignoring it didn't make it go away. Same here: some people don't care, others learn to deal with it, but it's still there.
 




Funny thing is, even as of today, the GTX 470/480 are still not supported, nor have they been certified:

Supported NVIDIA graphics cards for GPU acceleration

* Quadro CX (Windows)
* Quadro FX 3800 (Windows)
* Quadro FX 4800 (Windows and Mac OS)
* Quadro FX 5800 (Windows)
* GeForce GTX 285 (Windows and Mac OS)

Visit the NVIDIA website for system requirements and compatibility. The list of graphics cards that are compatible with Adobe® Premiere® Pro CS5 is updated on a regular basis.


http://www.adobe.com/products/premiere/systemreqs/

Its support might not be coming until Q3 (maybe):
http://forums.adobe.com/message/2730495#2730495



CUDA-enabled After Effects = nada.

http://forums.adobe.com/message/2728368#2728368

"After Effects CS5 doesn't take advantage of CUDA."

OpenGL acceleration is available to multiple platforms, just like in CS4. [:jaydeejohn:5]