The Real Nvidia GeForce GTX 970 Specifications

Status
Not open for further replies.

TeamColeINC

Reputable
May 6, 2014
71
0
4,640
3
I read that the last 500MB of VRAM actually runs significantly slower than the first 3.5GB, bringing the actual memory bandwidth to just under 200GB/s when all of the VRAM is being utilized.
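A quick back-of-the-envelope check of that figure, assuming the widely circulated explanation that only 7 of the card's 8 memory controllers serve the fast segment (the exact split is an assumption here, not a measured number):

```python
# Sanity check: advertised bandwidth vs. the per-segment split,
# assuming 7 of 8 32-bit memory controllers serve the 3.5 GB segment.
advertised_bw = 224.0            # GB/s, the spec-sheet number
fast_bw = advertised_bw * 7 / 8  # 3.5 GB segment
slow_bw = advertised_bw * 1 / 8  # 0.5 GB segment, upper bound
print(fast_bw, slow_bw)  # 196.0 28.0
```

Which lines up with the "just under 200GB/s" figure for the fast segment.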
 

Lmah

Honorable
May 3, 2013
472
0
10,960
87
True, it is false advertising; I'm not sure why one of their technical people didn't give the marketing team the proper specs to begin with. Their story seems full of holes.

Anyway, the 970 is still a beast of a card at a fair price. This news doesn't change that fact.
 

rdc85

Honorable
Apr 29, 2012
2,943
0
13,460
218
Well, about time..
I was hoping to see an article about this ever since the problem started picking up wind in mid-January.

Now this leads to a second question:

Is Tom's going to dig into this matter and do its own independent benchmarking/testing?
Frame-time variance / dropped frames (preferred), alongside the usual FPS tests.

On other forums, some people have said the card has a sort of issue when a game/app somehow breaks the 3.5 GB VRAM barrier,

and some 970 SLI users say the gaming experience is much worse in certain games
compared with a single 980 setup, where 970 SLI is supposed to be more powerful than a single 980.

Lastly, I'm requesting this because I put more trust in your testing/reviews :D
Right now, all the info we have is forum posts from users and
Nvidia's press release..
 

rdc85

Honorable
Apr 29, 2012
2,943
0
13,460
218


They said the last 512 MB segment runs at around 1/10 or 1/8 speed (about 20 GB/s).
To make it worse, it cannot be accessed in parallel:
when the last 512 MB somehow needs to be used/accessed,
all the other RAM segments have to wait until it finishes (slowly). This causes the hiccups some people are experiencing.

Needs more digging into this...

Edit: forgot to add that this is based on Nvidia's press release/statement, I believe.
There is even a block diagram explaining how the last 512 MB segment is cut off from the rest
(in the way it's accessed).
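A toy model of why a small spill into the slow segment hurts so much, assuming the commonly cited ~196 GB/s and ~28 GB/s figures and taking Nvidia's statement that the segments can't be accessed in parallel at face value (the numbers are illustrative, not measured):

```python
# Toy model: transfer times add because the two VRAM segments
# cannot be accessed in parallel.
FAST_BW = 196.0  # GB/s, 3.5 GB segment
SLOW_BW = 28.0   # GB/s, 512 MB segment (some report even less)

def transfer_ms(gb_fast, gb_slow):
    """Milliseconds to move the given amounts from each segment, serially."""
    return (gb_fast / FAST_BW + gb_slow / SLOW_BW) * 1000

print(transfer_ms(2.0, 0.0))    # ~10.2 ms: frame data entirely in the fast segment
print(transfer_ms(1.75, 0.25))  # ~17.9 ms: a 0.25 GB spill nearly doubles it
```

Under these assumptions, even a small fraction of frame data landing in the slow segment dominates the transfer time, which would show up exactly as the intermittent hiccups people describe.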
 

DCNemesis

Honorable
Jan 22, 2014
4
0
10,510
0
I think this author's article is ignoring the issue, and is not appropriately focused on the facts regarding this omission of information.

As soon as you start gaming at 1440p or 4k, this segmented RAM issue *WILL* affect performance.

Any game that uses large texture files (read: Skyrim mods, etc.) will start to approach and exceed the prioritized 3.5 GB threshold. As soon as VRAM usage exceeds 3.5 GB, the remaining 512 MB runs at 1/8th the speed of the primary segment.

This is a serious misrepresentation of the product on NVidia's part. This article is much less objective, impartial, and scrutinizing than I would expect from this website and its reviews and authors.

The car analogy is inaccurate, and the shilling for Nvidia is inappropriate.

I expected more from both NVidia and Tomshardware.

Too bad.
 

CaptainTom

Honorable
May 3, 2012
1,563
0
11,960
68
Except that it ISN'T that fast. Reviewers can't test for every scenario and they clearly missed a few benchmarks that apparently make the 970's performance fall off a cliff.

What happens in a year when most games are able to utilize the full 4GB and the card starts benching FAR lower than it is now?
 

none12345

Distinguished
Apr 27, 2013
431
2
18,785
0
My guess is this will mostly manifest as microstutter when you are running with high-quality textures. You need to find a game that has a texture set designed for 4 gigs of memory, though.

That might be hard today, but it won't be hard tomorrow. Targeting 4 gigs of memory will become more and more common. A few years down the line, I would bet that newer games start showing some drastic performance swings between frames.

With 512 megs of memory that is very slow compared to the rest, there will definitely be a performance impact if you are running near 4 gigs of usage.

If I had bought this card, I would rather just have 3.5 gigs and have the last 512 disabled (or used as a hidden L3 cache, but not reported externally). It's just going to toss in inconsistent performance under certain scenarios.

If I owned one, and this were before the year 2007 or so, I probably wouldn't care, as I was upgrading my graphics card every year or every other year. But that was back when they were making huge leaps each cycle. So who cares about future games; the card likely wouldn't still be in use by then.

But these days, with the slower cycles and the smaller hops each cycle, I keep cards much longer. So if I had just bought this card, I could easily see having it 4-8 years down the line. In that case, one would probably run into a lot of games with 4 gigs of textures, and in that scenario, if I owned the card, I would feel more than a bit betrayed.

But I don't own one, so I guess I can't complain. I know I'll definitely keep this memory segmentation issue at the front of my mind when I upgrade my GPU. I'm due for an upgrade, just waiting to see the next-gen ATI stuff before deciding. This has definitely put a big black mark next to Nvidia, though. I'm still kind of pissed off at them for the way they handled the drivers for a 6600 GT I owned way back: every single driver for it had one of two major problems, either the 3D performance was horrid and the 2D worked, or the 2D performance was horrid and the 3D worked. It depended on what game you played; one driver would go one way for one game and the opposite for another, but you could never have both work correctly in the same game. It's what made me swear off Nvidia for a while and switch to ATI (after buying only Nvidia for about 8 cards in a row).
 

DookieDraws

Distinguished
Oct 8, 2004
1,213
33
19,690
150
Makes you wonder how many other computer-related components out there have "incorrect" specs. Did NVIDIA make a mistake? Only they know the truth, but it's good to see people finding out about this and making a little noise around the web about it.
 

Fudgemuch

Reputable
Nov 17, 2014
2
0
4,510
0
Does this apply to the aftermarket cards? I get 70fps average (65 minimum) with no overclock in Shadow of Mordor at 1080p with the HD texture pack and all settings on the highest option on my MSI GTX 970. I've played 35 hours of that game, and there have been no microstutters that I or the Shadowplay FPS monitor can pick up, and my VRAM sits at a consistent 3.6GB.
Did MSI fix the problem, or is it just so small a problem that there's no perceivable difference in one of the best-looking games available?
 

toddybody

Distinguished
Dec 13, 2010
1,201
1
19,960
240
I appreciate Tom's statement of the obvious: The 970 is as impressive today as it was before the 3.5GB/500MB VRAM analysis.

What plainly sucks, though, and isn't stated boldly enough in the article, is that all of this is being chalked up to "bad marketing" or some communication error. NO. Tell me that the highly intelligent folks who work in GPU design made an honest mistake in reporting specifications... ridiculous.

Funny enough, I bought two 970s over the weekend and woke up to the early damning 500MB-access reviews on Sunday. Having 30 days to return them, I'll be anxiously waiting for Nvidia's response on the issue... and hopefully an incentive for disgruntled 970 owners to grab 980s, or the yet-to-be-announced 980 Ti.

I've owned Nvidia GPUs exclusively for the past 5 years... I'm ashamed of them right now.
 

rdc85

Honorable
Apr 29, 2012
2,943
0
13,460
218


I think that is your answer..

Nvidia's solution for the design weakness is to avoid using the last segment of RAM unless absolutely necessary
(via the driver or something similar).
Some people crank up the resolution (or use something called DSR) to force usage above 3.7-3.8 GB, and they see the "stuttering".

Also, having a big page file on an SSD seems to help
(they noticed the page file grew substantially after going above the VRAM barrier);
some tried limiting the page file to 2-4 GB, and soon after, their system crashed..

BTW, don't take this as an attack on Nvidia.
We're pressing this matter to push Nvidia to produce better solutions and better products in the future..
(Some people really do have the issue and are still searching for an answer/solution.)
 

tpi2007

Distinguished
Dec 11, 2006
475
0
18,810
6
Don, you forgot not only to mention but also to correct in the table that the card's memory bandwidth is not 224 GB/s, but rather 196 GB/s for the 3.5 GB segment and a maximum of 28 GB/s for the second segment. And it's likely less than 28 GB/s for the second segment, as Nvidia has said the smaller resources available to access it make it slower. Oh, and the fact that you can't use both segments at the same time.

So, also using a car analogy, it's like saying your car can do 196 mph in fifth gear and 28 in first gear, therefore your car's top speed is 224 mph. Everybody knows you can't do this math, but somehow some sites are letting Nvidia have a free pass on this item.

And yes, it does matter, because it changes the value proposition when making a buying decision. People thought they were getting an awesome deal because the card only had fewer CUDA cores, with everything else intact to help it out in the future. Now it performs well, but what about in a year, with new games using more VRAM coming out each month?

Also, don't forget the importance of PR: what were the real effects of announcing a card with the same number of ROPs as AMD's counterparts that had been available for a year? The message: this card is well prepared for high-resolution rendering. And the full complement of L2 cache to help ease concerns about a 256-bit memory bus and apparently low memory bandwidth (compared to AMD's offerings). Remember that part of the PR emphasized Delta Colour Compression, which practically nobody had heard of until then, even though it was in its third iteration already. This time they gave it a highlight, which indicates they wanted to transmit a message. Saying that the card has the full complement of L2 cache, the same as the GTX 980, would also fit that message.

Was it an honest communication mistake? I don't know. But I do know that it does matter, and it does affect the value proposition of the card in the long term (over the card's useful lifetime, that is).


Edit:

Don, I see that you have now corrected the table (it's good practice to note article changes, which you didn't as of this edit) to read:

224 GB/s aggregate
196 GB/s (3.5 GB)
28 GB/s (512MB)

But this is still incorrect. As Nvidia itself admitted, both segments can't be used at the same time, so you cannot therefore add the two bandwidth numbers. It's one OR the other at any given time. Anandtech (now your sister site) has an article saying exactly this. Saying "224 GB/s aggregate" is at the very least misleading.

I think that, after all this misleading, reporters should be the first to be accurate.
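To put numbers on why the two figures can't simply be added: since the segments are accessed one-or-the-other, the effective bandwidth is a harmonic weighting of the two, not a sum. A rough sketch, assuming (purely for illustration) that traffic spreads across the full 4 GB in proportion to capacity:

```python
fast_bw, slow_bw = 196.0, 28.0   # GB/s for the 3.5 GB and 512 MB segments
# Assumed traffic split (illustrative): 7/8 of accesses hit the fast segment
frac_fast, frac_slow = 7 / 8, 1 / 8
# Segments are accessed one at a time, so the access times add up:
effective_bw = 1 / (frac_fast / fast_bw + frac_slow / slow_bw)
print(effective_bw)  # 112.0 GB/s, nowhere near the "aggregate" 224
```

The real split depends on the workload, but under no split does the card ever move data at 224 GB/s.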

[Answer By Cleeve]
Fair enough, I've removed the 'aggregate' spec.
Keep in mind it's general practice to describe dual-GPU cards as having a 512-bit aggregate bus when each GPU really mirrors a 256-bit bus, so I considered it in the same vein. But honestly I've never liked that practice myself, so I'm quite OK with dumping it.

As for your (and others) concern with the car analogy, I stand behind it. Real-world measured performance is the metric that will always matter most to me. In the case of a car that's the quarter mile, and in the case of a graphics card, it's frames-per-second. The benchmarks we measured in frames-per-second have not been changed by this revelation, or rendered any less accurate, so I'm going to have to agree to disagree with you as to the merit of this metaphor.

As far as the merits of PR, I think you might be overestimating that versus raw performance. Nvidia has released other asymmetrical cards in the past, and I never got the impression that the public boycotted them because of it. If they *were* avoided, it was because they were slower than the Radeon competition, plain and simple. It should come down to frames-per-second.

But once again, everyone is free to disagree and hold their own opinion. I'm simply calling it as I believe it to be. I would probably feel differently if the company had a history of lying about technical specifications, but I can't recall anything similar in the last 20 years or so. I *can* recall them owning up to other strange memory configurations with similar limitations, so it doesn't seem logical to assume they decided to blatantly lie this time around when they previously came clean.

But who knows? Regardless of what any of us believe, by design or by accident, Nvidia has tremendous mindshare to earn back if it wants the public trust. This kind of mistake should be taken very seriously. If it ever happens again, I don't think anyone would believe that it's an accident.

But to my mind, that doesn't affect the GTX 970's proven performance, nor make it any less desirable for the money. If you feel it does, more power to you. Your opinion is as valid as my own as long as you have valid reasons to justify it.
 
Interesting. If I had the GTX 970 and it performed as advertised, then I couldn't care less about some misinformation.

But I'm not saying it's a good thing. I hope it doesn't happen again, but I don't think it needs to be such a huge deal.
 

vmem

Splendid
My personal 2 cents as an owner of a GTX 970 in a custom water loop:

1. The 970 performs like I expected from reviews, runs all my games smoothly on my 1440p panel, and I do see the full 4GB of VRAM usage just from turning the settings to 'ultra' on Shadow of Mordor.

2. As some people have pointed out, a large page file on an SSD seems to mitigate the stuttering somewhat... I am running a 40GB page file and have occasionally seen over 18GB written to it.

3. For anyone thinking about getting a 2nd card for SLI as an upgrade path, things get a bit dicier. That 'slow' 0.5GB could really affect us (yes, I originally wanted to go SLI).

4. Am I all butthurt and crying over this? No. Do I feel like Nvidia misrepresented their product? Yes.

What do I want from Nvidia?

MINIMUM: Some firmware or driver update that solves the stuttering issue for people who experience it. If it means limiting the GPU to writing only to the "good" 3.5 or 3.6 GB of VRAM, so be it.

Issue a public apology; don't just hide behind "we didn't know about this". Even if it's an honest mistake, own up to it, and at LEAST apologize and promise to do better at testing for these issues in the future.

Ideally, I'd like some form of compensation, but I realize the chances are slim and I purchased the card at my own risk. But if Nvidia cannot even apologize for their mistake (honest or not), then I'd say all this money-grubbing over the past couple of years has gotten to their heads, and the company is headed for an ugly fall.
 