The GeForce GTX 770 Review: Calling In A Hit On Radeon HD 7970?

This is really clever pricing from nVidia.

The recommended MSRP is $400; nVidia (and probably its partners) knows that prices will creep up to a more financially viable spot, whereas the Radeons will slowly and slightly slide to a lower price point.

But initially, sales will soar and word will spread fast due to its relatively low price.
Very clever, as I see it.

Case in point: average Newegg prices of the GTX 680 vs. 7970 GHz Edition vs. 7970 (percentage changes worked out in the sketch below):

680 price at launch: $500
680 current price: ~$517 (prices crept up over time)

7970 GHz Edition at launch: $500 (by which time the 680 had crept up to $520)
7970 GHz Edition current: ~$480 (prices crept down over time)

7970 price at launch: $550
7970 current price: $409 (prices crashed over time)
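To put numbers on "crept up" versus "crashed", here's the percent-change math on those street prices (a minimal sketch; the inputs are the approximate Newegg figures above, not official MSRPs):

[code]
# Percent change from launch price to current street price,
# using the approximate Newegg prices quoted above.
prices = {
    "GTX 680":  (500, 517),
    "7970 GHz": (500, 480),
    "7970":     (550, 409),
}

for card, (launch, current) in prices.items():
    change = (current - launch) / launch * 100
    print(f"{card}: ${launch} -> ${current} ({change:+.1f}%)")

# GTX 680:  $500 -> $517 (+3.4%)
# 7970 GHz: $500 -> $480 (-4.0%)
# 7970:     $550 -> $409 (-25.6%)
[/code]

A +3.4% drift on the 680 versus a -25.6% slide on the 7970 is exactly the asymmetry the pricing strategy exploits.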
 
This has to be by far the most disappointing review I have read on this website. It shows a complete lack of journalistic integrity. Someone please tell me how this review doesn't show a degree of bias? How can the reviewer call this card a smart buy when compared against the 7970 GHz Edition? There is absolutely no reason to buy this card at such a price point. Does NVIDIA give "donations" to tech websites these days in order to get favorable reviews?

This card offers a very poor value-to-performance ratio when compared against the 7970.
- It has a 256-bit bus compared against the 384-bit bus on the 7970, and it has only 2GB of VRAM against 3GB.
- The 7970 includes 3 or 4 AAA games in its gaming bundle and this one doesn't.
- Less memory and bandwidth mean that it won't do well at resolutions higher than 1080p.
- The 7970 has much higher performance in non-gaming applications.

I don't doubt this card will give you excellent 1080p performance, and it might be just a bit faster in some games. But asking $450 for this card against the 7970 GHz Edition, which can sometimes be found for $400, is just ridiculous. To make matters even worse, you can make a little money by mining bitcoins on the 7970, so the card almost pays for itself; the rough math is sketched below. (It is too bad that bitcoin mining with GPUs will soon become a thing of the past due to ASICs, but you can still make a couple of hundred bucks until the end of this summer, when the mining difficulty will skyrocket.)
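For anyone curious how "pays for itself" pencils out, here's a back-of-the-envelope sketch of GPU mining revenue (every number below is a hypothetical placeholder for illustration; real 2013 hashrates, difficulty, and exchange rates moved daily):

[code]
# Rough daily GPU mining revenue: your share of the network hashrate
# times the coins issued per day, times the exchange rate.
# All figures below are assumed placeholders, not measured data.
card_hashrate    = 0.6e9     # hashes/sec for one 7970 (assumed)
network_hashrate = 120e12    # total network hashes/sec (assumed)
blocks_per_day   = 144       # Bitcoin targets ~1 block per 10 minutes
block_reward     = 25        # BTC per block at the time
btc_price        = 120       # USD per BTC (assumed)

share     = card_hashrate / network_hashrate
daily_usd = share * blocks_per_day * block_reward * btc_price
print(f"~${daily_usd:.2f}/day before electricity")   # ~$2.16/day
[/code]

At those assumed figures that's roughly $2 a day, or about $200 over a summer, which is the ballpark being claimed here; electricity costs (which a reply below brings up) eat into it.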

To summarize: The 7970 does better at higher resolutions. It has higher compute performance in professional apps. It pays for itself with bitcoin and litecoin mining (litecoin mining is still GPU-viable and will be for some time). The 7970 GHz Edition costs $400. The 7970 comes with 3 or 4 AAA titles which cost well over $100, so in reality you end up paying around $300 or less for the card itself.

Need I go on?...

 


Please tell me how many FPS and how much real-world improvement you have gotten with that OC?
GPU Boost 1.0 must love you...
 


You, sir... are an obvious troll. And if not, uhm... figure it out...
 


How is this any different from AMD overclocking their cards into GHz Editions? Which, by the way, run $50 more (nullifying the free games even if you want them), have worse driver issues, more heat, more noise, get beaten in almost every game, etc. I don't count bitcoins or F@H or OpenCL benchmarks NOT based on actual products... Why not benchmark Adobe with CUDA vs. whatever AMD can do (OpenGL?)? Many home users have Adobe products. Most of us don't buy AutoCAD/SolidWorks etc. at home, or at least not without understanding you need a Quadro/Tesla/FireGL etc. to run them to make money. Who buys a $2000-5000 app for work but pairs it with a $400 card?
http://cad-software.findthebest.com/compare/5-24/AutoCAD-2013-vs-SolidWorks-Premium-2012
SolidWorks/AutoCAD for $4000... OK, raise your hand if you'd put anything but a PRO card with a purchase like this? You're SERIOUS about work if you fork over $4000! You buy a Tesla etc. if only for ECC and driver support that is REAL.

I don't know anyone even doing bitcoin/F@H (both run up your electric bill, BTW); bots do bitcoins now and blow away cards. These 770s will sell out for a while, just like the 780s and Titans, and for good reason. At 1920x1200 or 1080p, where 98% of us run (according to steampowered.com surveys), these cards rock vs. AMD.
http://folding.stanford.edu/
F@H is only installed on 287,000 PCs. How many of those are HOME users? There are 350 million PCs sold each year. Why is this even benchmarked? Waste of time. Do the math (worked out below): even if those were ALL home users, that's about 0.08% of the PCs sold each year... ROFLMAO. That's really important, isn't it? It would take a good 12x more installs to even get to 1% of PC sales... LOL. Nobody should ever quote this as a reason to buy a card. For the decimal point of PC users actually wasting their time doing this, will you get paid anything if you solve cancer while running up your electric bill?... LOL. NOPE. Who cares about most of these synthetic OPENCL benchmarks? Does Sandra earn you money? Does F@H earn you money? Is this how low you have to dig to find an OpenCL benchmark? YES. Because everything runs CUDA, OpenGL, or DirectX. OpenCL isn't important yet, and it remains to be seen if it EVER will be with no funding from a company like NV (AMD is broke).
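The quick math, using the install count and yearly sales figure quoted above:

[code]
# What fraction of yearly PC sales do the F@H installs represent?
fah_installs = 287_000
pcs_per_year = 350_000_000

share = fah_installs / pcs_per_year * 100
print(f"{share:.2f}% of yearly PC sales")                                 # 0.08%
print(f"{0.01 * pcs_per_year / fah_installs:.1f}x more installs for 1%")  # ~12.2x
[/code]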
"Let’s not mince words. If you need the fastest double-precision math available, the GeForce GTX 770 is not the card for you."
Let me know when you BENCHMARK something REAL to show this, Tom's Hardware. Almost everything you can do in OpenCL that makes money (OK, I'll say ALL things) you can already do in CUDA, OpenGL, or DirectX. Sandra and F@H mean nothing and predict NOTHING. They earn you NOTHING, correct? So the statement you made is misleading at best, ridiculously naive at worst (or is that the other way around... LOL).

Also note the benchmarks should be better for NV. Tom's has this practice of NOT benchmarking the CARDS as they ARRIVE (for both sides)! No OOBE here; they run reference clocks. They should include the cards they get in the benchmarks, as who the heck buys a reference card? Ridiculous. I'm wondering why anyone EVER ships Tom's Hardware a card to be benchmarked; they immediately downclock them, so there's no point in shipping a card to them for benchmarking. You can take Tom's benchmarks and add 5-10% to them all, as you'd buy a card like the three they show.

http://www.techspot.com/review/678-gainward-geforce-gtx-770/page3.html
This is how they should be shown: the reference card there in green, but the ACTUAL CARD YOU'LL BUY right there next to it in another green. You don't have to do ANY work to get faster perf when they sell them OC'd; the OOBE is OC'd. We should get that in Tom's Hardware reviews, not NEUTERED benchmarks. I would rather jump off a bridge than buy a reference-clocked card when I can get another 10% on the clocks WITH a warranty.

For people still complaining about the memory being 2GB, I'd remind you that less than 2% of the market runs above 1920x1200, and NV cards win many tests above this res anyway (even though they usually end up unplayable at under 30fps minimums).
http://store.steampowered.com/hwsurvey/
Go ahead, see for yourself. Why benchmark for under 2% of the user base? And those 2%, according to that same link, have TWO CARDS or more.

And once you go above what 98% of us use (why Tom's tests for the other 2% of the market seems silly; I have no idea what the point is), you get things like this:
http://www.techspot.com/review/678-gainward-geforce-gtx-770/page6.html
Less than 30fps AVG in Hitman (so how stuttery are these, with minimums even worse?)... So for anyone who doesn't understand: you need two cards for these resolutions (or a dual-GPU card). Sure, there are a few games that can run there (not many maxed out totally), but it's usually a stuttery affair. Even HardOCP shows the same; they still had to turn details off at 1080p to keep above a 30fps MINIMUM with the GTX 780.

http://www.techspot.com/review/678-gainward-geforce-gtx-770/page7.html
Tomb Raider shows the same at 2560x1440: averages right in the mid-30s, so the minimums will stutter. I really wish everyone would report minimums and averages. Averaging in the 30s means totally unplayable to me when you consider how far the minimums drop and how long they stay there. I don't enjoy a CHOPPY experience when gaming, no matter whose GPU is in there.
"This also meant that the GTX 770 consumed 10% less power than the Radeon HD 7970 GHz Edition while providing 7% more performance."
From the power page at TechSpot. That pretty much sums up why it's a buy even at EQUAL pricing (the performance-per-watt math is below), and it's priced roughly $50 less ($400 for the 770 vs. $450 for the 7970 GHz Edition).
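Combining those two TechSpot figures into a single efficiency number (simple arithmetic on the quoted percentages, nothing more):

[code]
# 7% more performance at 10% less power, expressed as perf-per-watt.
perf_ratio  = 1.07   # +7% performance (TechSpot)
power_ratio = 0.90   # -10% power draw (TechSpot)

gain = (perf_ratio / power_ratio - 1) * 100
print(f"~{gain:.0f}% better performance per watt")   # ~19%
[/code]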

The GTX 680 started at $500, so you're getting this now at $400 (and it's FASTER than the 680), which was the original price of the GTX 670. Anyone who doesn't get this math is just a fanboy.

Personally, I say wait for 20nm from both, but if I were in the market for a $400 card, the 770 is the best at what it was intended for (even Tom's comes to this conclusion). It's a freaking GAMING card, and in this regard the 7970 GHz Edition RARELY wins at any res and loses some games by large margins.

Techspot's conclusion:
"Compared to the Radeon HD 7970GHz Edition, which can be purchased for $430, the GTX 770 provided 7% more performance in our frames per second testing, while this figure was increased to 13% in our frame time testing. "

http://hardocp.com/article/2013/05/30/msi_geforce_gtx_770_lightning_video_card_review/#.UahtyldRj8c
$450 for the OC'd MSI Lightning card, and it's in stock at Newegg. A 104MHz OC.
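For scale, that factory OC lands right at the "another 10% on the clocks" mentioned earlier (assuming the GTX 770's published 1046MHz reference base clock):

[code]
# Size of MSI's 104 MHz factory OC relative to the reference GTX 770.
# 1046 MHz is Nvidia's published base clock for the reference card.
reference_base = 1046   # MHz
factory_oc     = 104    # MHz

print(f"+{factory_oc / reference_base * 100:.0f}% on the core clock")   # ~10%
[/code]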
"The MSI GeForce GTX 770 Lightning video card does demand a $60 premium, but even at $459 it still competes with many GeForce GTX 680 models on price, but is 15% faster. Our tests today have shown that it provides a better gameplay experience compared to both the GTX 680 and HD 7970 GHz Edition. Therefore, this is a price premium that is worth it. It actually provides a gameplay experience improvement, and thus is justified."

I can provide a dozen of these statements from sites. Also note that there are overclocked cards at Newegg under $419 that are fairly close already. They're selling well already, as half are out of stock at Newegg... LOL. I predict a great quarter for NV and slightly higher margins than last Q for any stock owners on here :)
 

Why would AMD suddenly go under when their share price is skyrocketing and they're set to earn billions from providing the APUs for the next-gen consoles? That sort of sky-is-falling thinking might have made sense late last year, but not now.


And now we have one from team green as well. Guess it's unavoidable.
 
somebodyspecial writes:
> For people still complaining about memory being 2GB, I'd remind you less than 2% of the market runs above 1920x1200 ...

That kind of statistic can be very misleading. I talked to one high street small computer shop owner in CA who told me 45% of his income comes from high-value enthusiast items, because the margins on cheaper parts are tiny. Indeed, he makes a loss on entry HDDs because it's important to keep the customers coming in so they'll buy other things as well.

People use statistics as a drunk uses a lamp post: for support rather than illumination. 😉

Just because the percentage of users who have a certain type of item is low, that doesn't mean the revenue from said item isn't very important. Premium brands sell because there's a market for them. Some always want the latest and best.

I agree with you about F@H and pro apps though.

Ian.

 
[citation][nom]dragonsqrrl[/nom]If it's a choice between just a 780 and 2 580's for $670, I would definitely go with the 780. ...[/citation]

I forgot to mention that I'm also doing CUDA research atm, so in the end I bought two 3GB 580s, which cost 450 UKP total. Ahh, thou doth luxuriate in the warm glow of US pricing. ;D Here, the cheapest 780 from the same seller would have cost over 100 UKP more.

NVIDIA could easily create a *real* compute card if they wanted to. All it needs is a Titan-type card but with a 512-bit bus. It's why the 580 still does so well for CUDA (Titan has more than 5X as many cores, but is it 5X faster than a 580? Not by a long way; it doesn't have the mem bandwidth needed).

Ian.

 
To possibly head off some of the fanboys, it might be good to remember a comment Cleeve made a while back, something like "There are no longer any bad cards, just bad prices."
Or, GTX680, GTX770, HD7970 [GHz]; whatever, you're not going to be suffering.
 




It won't be. Haven't you been paying attention? The whole lineup is getting a price boost of 15%-45%.

Expect to see the 760 at about the same price as the 660 Ti (since it will basically be a 660 Ti, that won't be too surprising anyway).
 


Does that not happen ANYWAY with each new series release? LOL
 

Except maybe the Radeon HD 7990, if you're sensitive to microstuttering.
 


Wait. Are you saying I AM an Nvidia fanboy?
 

Only if you'll agree to call me an AMD fanboy. 😀

Nah, I just meant "you" in general, not you in particular. Like... "if one is sensitive to microstuttering".
 


Did not mean to reply to that one. LOL, I was supposed to reply to the "And now we have one from the green team too" part.
 
The cooler is overbuilt for the 770, so extra overclocking is likely possible? Yeah, right. Because this card is in essence no different from a 680, only already overclocked, I'd say this headroom might not be as much as you'd think.
 
[citation][nom]mapesdhs[/nom]NVIDIA could easily create a *real* compute card if they wanted to. All it needs is a Titan-type card but with a 512-bit bus. It's why the 580 still does so well for CUDA (Titan has more than 5X as many cores, but is it 5X faster than a 580? Not by a long way, it doesn't have the mem bw needed). Ian.[/citation]
... that's not the way it works at all. You can't compare Fermi cores 1:1 with Kepler cores, and it's not due to a significant lack of memory bandwidth on Kepler. Fermi uses a hot clock on its shader domain, which runs at double its core clock. This is one of the main reasons it could get away with using relatively few cores while providing good performance for its time. A drawback to this method is higher power consumption, and the further you scale up, the less economical it becomes (diminishing returns). This is why 2 Kepler cores offer roughly the same performance as one Fermi core; some rough numbers are below. Nvidia trades the shader hot clock for more cores in Kepler, and this is a good choice, especially for compute performance.
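To illustrate with back-of-the-envelope numbers (peak-FLOPS arithmetic from the published specs; real CUDA throughput depends heavily on the workload):

[code]
# Peak FP32 throughput = cores x effective shader clock x 2 FLOPs (FMA).
# Fermi (GTX 580) runs its shaders at a hot clock of 2x the core clock;
# Kepler (Titan) dropped the hot clock in exchange for many more cores.
def gflops(cores, shader_clock_mhz):
    return cores * shader_clock_mhz * 2 / 1000

gtx580 = gflops(512, 1544)   # 512 cores at a 1544 MHz hot clock -> ~1581 GFLOPS
titan  = gflops(2688, 876)   # 2688 cores at an ~876 MHz boost   -> ~4709 GFLOPS

print(f"core ratio: {2688 / 512:.2f}x")       # 5.25x the cores...
print(f"FP32 ratio: {titan / gtx580:.2f}x")   # ...but only ~3x the peak FLOPS
[/code]

So 5.25x the cores buys roughly 3x the peak FLOPS from the clock structure alone, before memory bandwidth even enters the picture.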

Not that any of this really matters since you already have the cards (not sure why you even asked the question to begin with), but I still feel like a 780 would've been a good alternative. I don't have any CUDA-specific examples to give, but at least on paper a 780 would provide more raw FP32 performance than two 580s. And if you need FP64? I probably would've recommended trying to go for a Titan, Nvidia's first entry into the prosumer market. It's significantly more expensive, but you'd also get far more performance for your money.

And Nvidia has a separate brand devoted to *real* compute cards; it's called Tesla. You'll notice that memory bandwidth on these cards is often lower than on their GeForce equivalents.
 


You clearly have no idea about business. Going into the red for a year doesn't bankrupt a company. You, much like those idiots going with Nvidia due to a false notion of imminent doom and paying a premium for equivalent or inferior products (however, Nvidia does have a superior product when discussing multi-card setups), only propagate falsehoods without even realizing that your beliefs are completely biased and have no factual basis.

Take a look at AMD's balance sheets and figure out the reality of their financial standing.
 




Ah, OK, cool then... Since I tend to go for the underdog... LOL...
 


More than you think: http://www.anandtech.com/show/6994/nvidia-geforce-gtx-770-review/17
 