The Real Nvidia GeForce GTX 970 Specifications



You need to cry foul, otherwise this is viewed as acceptable; any product sold at a significant cost should have accurate specifications. It's not going to make me stop buying Nvidia products in the future, since it doesn't really affect me; I usually buy the top cards, which I would assume wouldn't have blundered specs like this. But it will affect my decisions when recommending hardware to other users.
 
They make these chips, then go and disable parts of them so they can charge less, all to fill a gap in the market. This practice gets on my nerves. I understand they make lots of money doing it, but deliberately slowing down progress just to cash in does not sit well with me.
 
Between the corners they cut on their cards, the proprietary standards and features they push, and the political shenanigans they pull with game developers, if it weren't for the fact that nVidia has competent software support, I would have jumped ship to AMD years ago.
 
I'm a supplier quality engineer at my company. I would be sending them a CAR (Corrective Action Request) and seeing what systematic changes they are going to put in place to prevent crap like this from happening again. Honestly I would be very frustrated and would want a firmware update to lock the card at 3.5GB to prevent any sort of slowdown.
 
I'm confused. How does Tom's Hardware go from the scientific method and objective reviews to subjective viewpoints, such as pure speculation highly in favor of Nvidia, almost laughably so?
The chart is wrong and so is the analogy. A better analogy is theoretical top speed: theoretically the vehicle could do 195 mph, but it only does 170 mph when later tested. An unlikely scenario, but it nonetheless fails to deliver as advertised.

Been reading TH for years and haven't seen anything quite like this article. Concerns me.
 


Rather than buying two 970s for SLI, I would have bought two 780 Tis, which initially traded down to sub-$400 before rebounding.

 


You, my friend, should do some more reading on this. The product is flawed in what it is supposed to deliver. Anyone who has this card and plays beyond 1080p is taking performance hits because of the memory issue. So, saying that this doesn't impact performance much shows me that you just skimmed through the articles all over the net and came right here to troll!!!
 


don't be, most review sites have run the same article, worded almost the same way..
yeah, it was just a "harmless miscommunication," we believe them, since "they gain nothing from it"
(the hiding-the-real-specs thing)

it's as if there were guidelines that had to be followed..
just reading them makes my head spin...

don't mind me, it's just conspiracy theories.. 😀

By the way, the spin doesn't matter.. what matters is getting the full benchmark results:
not just avg FPS, but full frame time variance, SLI setups and single cards, whether a big page file helps or not, which games are affected..
and whether there is any effect on the gaming experience when it uses the last VRAM segment (see the sketch below)...
(not opinion but data), so then we can say:

"it's nothing to worry about, Nvidia did a great job finding a solution for the design weakness."
or
"this card should not be used in SLI, or run at 1440p, and a large page file is recommended to help ease the effect.."

that's more important, IMO

come on, Tom's, doesn't this pique your interest enough to do some testing...
find the truth, search for workarounds, etc... 😀
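Something as simple as this would already say more than average FPS alone. A rough sketch (the log file name and format here are made up; any per-frame time logger would do):

```python
# Sketch: why average FPS alone can hide stutter.
# Assumes a plain text file with one frame time in milliseconds per line.
import statistics

with open("frametimes_ms.txt") as f:  # hypothetical log file
    frame_times = [float(line) for line in f if line.strip()]

avg_ms = statistics.mean(frame_times)
ordered = sorted(frame_times)
p99 = ordered[int(len(ordered) * 0.99) - 1]  # 99th percentile spike

print(f"average FPS: {1000.0 / avg_ms:.1f}")
print(f"average frame time: {avg_ms:.2f} ms")
print(f"99th percentile frame time: {p99:.2f} ms")
print(f"frame time std deviation: {statistics.stdev(frame_times):.2f} ms")
```

Two setups can post the same average FPS, but the one that keeps dipping into the slow segment would show up in the 99th percentile and standard deviation numbers.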


edit: can someone correct me.. isn't the 970 the second strongest GPU in Nvidia's lineup right now, with the 980 as the top card?
I find it hard to believe the second most powerful card is only meant to be played at 1080p... (yeah, some people have said that)..
 
I purchased this card after months of research and waiting for it to be released. Yes, I DID decide on this card because of the price per performance and the fact that you can pretty much boost it to a stock 980. This also gave me future-proofing comfort. I want this card to do as originally advertised, and 4GB of VRAM will be NEEDED with these new games (which I planned for with the 970 and its specs), especially given how quickly they are getting more and more capable of using the entirety of our cards.

Now I'm told, haha, you thought you had 4GB but we lied! I understand that we have 4GB of RAM until the 512MB is needed.. but.. you didn't say "hey, well.. you get 4GB.. but.. there's a catch.. sometimes you won't.." Yeah, thanks, I'll pass. With 4K technology, AA being used more, and games requiring these specs, I don't see how the card I purchased, with its original specs, will last me through 2015.

This pisses me off and I am HUGELY disappointed in NVidia. I've been loyal to your company. The least you can do is be loyal to the people who support it. I just hope I can get my money back and purchase a card that doesn't have fake specs. This is the same feeling you get when you get scammed. They need to acknowledge their mistake and actually do something about it.
 


Call them up, dude. Explain what you think, and hopefully, to keep you as a loyal customer, they will do something for you. If not, AMD makes capable cards as well!!!!

 
Hi Don,

My question is, when can we expect you guys to test these new revelations? It would seem that Nvidia's GTX 970 has a unique way of accessing its memory. This should warrant further testing. A quick trip over to Newegg shows that the 970 is still advertised as a 4GB card (which technically it is). But given this new information and the unique VRAM access, testing the 970 vs. the 980 at their advertised VRAM limit would make sense, since your original review did not cover it. There are a lot of people who need to know how this GPU will perform at or near its 4GB limit. As a tech journalist, enlightening your readers on topics they are interested in should be your primary goal. Are you guys planning on doing some actual testing, or should I look elsewhere? Thanks for your attention.
 
One of the most powerful video cards ever made for such a low price point. I think customers should take it as it is and be happy. It's false marketing, but it's still great performance for the money. Not like the FX 81xx at launch, advertised with 2 billion transistors when it actually had 1.2 billion.
 
This pretty significantly limits the longevity of the card. Sure, it crushes all the games that are out now, but in three years the card will not be nearly as useful as it would have been with the 4GB of full-speed memory advertised.

That's the biggest problem.
 
This wasn't an accident. It takes very deliberate action to do the extra engineering for a partitioned memory system. You have to get both the hardware engineers and the software engineers fully involved or the thing isn't going to work. How well the partitioned memory on the GTX 970 works should make clear how well NVIDIA understood what they were doing.

From an engineering standpoint, I would rather have the 3.5 GB of memory on the card rather than the added complexities of juggling that last 512 MB, as anywhere you add complexity you have that potential for unforeseen problems. Is this a case of marketing or management telling the engineers what numbers they had to achieve for the card, rather than the engineers just doing their job?
 
The change does make a difference; to say otherwise is inexcusably short-sighted. Some of the major selling points of these second-tier cards revolve around their life expectancy in a system.

1. People buy these cards with the expectation that they can CrossFire/SLI at a later date and maintain or increase their 40 FPS minimum without purchasing next-gen hardware for at least another generation.

2. These cards were marketed on their ability to perform at higher resolutions, particularly in SLI at 4K; while they are mostly capable of this now, the reduced specs will mean performance decays faster at high resolutions than on the much older AMD R9 200 cards.
 
Don, I see that you have now corrected the table to read:

224 GB/s aggregate
196 GB/s (3.5 GB)
28 GB/s (512MB)

But this is still incorrect. As Nvidia itself admitted, both segments can't be used at the same time, so you cannot therefore add the two bandwidth numbers. It's one OR the other at any given time. Anandtech (now your sister site) has an article saying exactly this. Saying "224 GB/s aggregate" is at the very least misleading.

I think that, at the end of all this misleading, reporters should be the first to be accurate.
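To put rough numbers on the either/or point, here is a minimal sketch (the time split between segments is a made-up parameter, not measured data):

```python
# With time-shared (either/or) access, the two segment bandwidths
# cannot simply be added together.
FAST = 196.0  # GB/s, 3.5GB segment
SLOW = 28.0   # GB/s, 512MB segment

def gb_moved_per_second(fast_fraction):
    """GB transferred in one second when fast_fraction of the time is
    spent on the fast segment and the remainder on the slow one."""
    return FAST * fast_fraction + SLOW * (1.0 - fast_fraction)

for frac in (1.0, 0.9, 0.5):
    print(f"{frac:.0%} of time on fast segment: {gb_moved_per_second(frac):.1f} GB")
# Best case is 196 GB, never 224 GB -- hitting 224 would require
# reading both segments at the same time, which Nvidia says cannot happen.
```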
 
Update:

I investigated further and discovered that the ONLY real issue is the memory partitioning. Furthermore, all reputable sources are reporting that, at worst, this reduces the frame rate by 3% compared to what it would have been with a full ROP count and full-speed access to all 4GB.

3%!

So people are whining about only having 3.5GB (not quite true) and about their games suddenly starting to stutter when using between 3.5GB and 4GB.

*On the stutter issue (which isn't confirmed but MAY be true):
"Nvidia clamied this would be around a three percent drop. It's worth noting that Nvidia's benchmarks only looked at average FPS performance, which may not account for the frame stuttering that some users claim to be experiencing."

So it's still a little confusing.

I like NVidia but I'm not a fanboy. I'm not too concerned with this, but I will be looking closer and won't get a GTX 970 unless there's a fairly simple fix (if it's even an issue).
 
"256-bit
224 GB/s aggregate
196 GB/s (3.5 GB)
28 GB/s (512MB)"

That's misleading. It's not a 256-bit bus; it's a 224-bit or a 32-bit bus. And it's not 224 GB/s aggregate; it's 196 or 28.

The bus is physically incapable of running in a 256-bit mode, and it's physically incapable of transferring 224 GB per second. Since it's either/or, you can't just add the two together.

The most accurate thing would be to list the bandwidth range from worst case to best case, i.e., a range of 28 to 196 GB/s. But as the 28 number is much rarer than the 196, that's not exactly fair either.

The best way I can come up with is to list a bandwidth range from the average worst case to the maximum. The maximum possible is 196. The average worst case should be a weighted average of 7/8 the fast speed and 1/8 the slow speed, which would be 196 * 7/8 + 28 * 1/8 = 175 GB/s.

So, I would list the card as having a bandwidth range between 175 and 196 GB/s. You could do the same for bus width and come up with 200 bits, but that number is a lot more meaningless; it doesn't really matter in context how many bits it uses.
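A quick sanity check of that weighted average, as a sketch:

```python
# Capacity-weighted worst-case estimate described above: weight each
# segment's bandwidth (and bus width) by its share of the 4GB total.
FAST_BW, SLOW_BW = 196.0, 28.0         # GB/s
FAST_BUS, SLOW_BUS = 224, 32           # bits
FAST_SHARE, SLOW_SHARE = 7 / 8, 1 / 8  # 3.5GB and 0.5GB out of 4GB

avg_bw = FAST_BW * FAST_SHARE + SLOW_BW * SLOW_SHARE
avg_bus = FAST_BUS * FAST_SHARE + SLOW_BUS * SLOW_SHARE

print(f"average worst-case bandwidth: {avg_bw:.0f} GB/s")  # -> 175 GB/s
print(f"weighted bus width: {avg_bus:.0f} bits")           # -> 200 bits
```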

What a mess, though. It would have been easier for them to call it a 3.5GB card with a 224-bit bus and 196 GB/s, plus a 512MB non-unified segment (I don't know what to call it; it's not an L3, it's not a cache of any kind, it's not a buffer; it's a separate thing that needs new terminology).
 