So Nvidia lied about the GTX 970 for months



Err, no, the whole thing is not an evil conspiracy. If there is any conspiring going on, it's that the yield from "recycling" imperfect 980 chips came out a little lower than expected, and Nvidia decided to do a little false marketing to sell more 970s.

That aside, your comment/opinion about using us as consumers is entirely off track. As Paladin has stated, designing and taping out a die is INSANELY expensive, and this same type of die recycling has been in use for over a decade, not only in GPUs but also in CPUs. FYI, the 980 Ti (if it happens) will be a fully enabled GM200 chip with double precision disabled to prevent it from stealing market share from the new Titan. Selling such a 980 Ti actually hurts Nvidia profit-wise, and they'll likely only do so if AMD's new flagship is significantly more powerful than the 980.
 


In light of the recent architectural notes from Nvidia, I would like to point out that the 970 may in fact not perform at the advertised levels, particularly in SLI. The average frame rate graphs disguise the microstutter issues some users have (allegedly) encountered. Personally, I would like to see more in-depth SLI performance analysis before passing judgement. With games using more and more VRAM as the current trend, we can already see where the full 4GB of VRAM may be needed at 1440p and above. Furthermore, you can imagine the problem of the "bad" 0.5GB of VRAM getting worse the more cards you have in SLI, since every time one of the cards touches that VRAM section, you may encounter microstutter. Now I'll admit this is all speculation on my part, which is why I would like to see FRAPS trace analysis of 970 SLI setups from reviewers before I pass my final judgement.
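The kind of FRAPS trace analysis asked for here is easy to sketch: microstutter shows up in worst-case and 99th-percentile frame times, which an FPS average hides. A minimal Python sketch with made-up numbers (not real 970 SLI data):

```python
# Sketch: spot microstutter in a FRAPS-style frame-time trace.
# All timestamps below are invented for illustration.

def frame_times(timestamps_ms):
    """Convert cumulative frame timestamps into per-frame durations."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def stutter_report(timestamps_ms):
    times = frame_times(timestamps_ms)
    avg = sum(times) / len(times)
    # The 99th-percentile and worst frame times expose stutter
    # that an identical FPS average would hide.
    p99 = sorted(times)[int(len(times) * 0.99) - 1]
    return {
        "avg_fps": 1000.0 / avg,
        "p99_frame_time_ms": p99,
        "worst_frame_time_ms": max(times),
    }

# Two runs with nearly the same average FPS: one smooth, one stuttery.
smooth = [i * 20.0 for i in range(101)]      # steady 20 ms frames (~50 FPS)
stutter = [0.0]
t = 0.0
for i in range(100):
    t += 50.0 if i % 10 == 9 else 16.7       # periodic 50 ms spikes
    stutter.append(t)
```

Both traces average close to 50 FPS, but the second one's worst frame takes 50 ms, which is exactly the kind of hitch that only a per-frame trace reveals.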
 
I'm not mad, but I think I see what Nvidia may be doing, as I stated in my post above: each lesser card got something new. The 970 got the memory deal and the 960 got the bus deal. They're testing the memory thing with the 970, and now that they see all the feedback from it, Nvidia is working on the fix.

Then you've got the 960 with the bus thing, and those guys will feed back on that, and once Nvidia gets it all working right, it will all be implemented on one new card.

That's just how I'm looking at it. Somebody's got to be the guinea pig for all this. Why hire an R&D guy when customers will pay you to do it?
 


Or perhaps the way the bad SMMs were cut off the 970 makes a normal bus connection to a full 4GB of memory impossible? But they made the 970 anyway because they wanted to recycle the imperfect 980 chips.

On the other hand, the GM206 chip was designed for a 128-bit bus, so they needed the "faster memory thing", as you call it, to actually get the card to perform fast enough for its price bracket (it still underperforms, IMO).

Yes, it's all cost-cutting strategy. But it's not the huge conspiracy you're talking about.
 


Not exactly a pro review here, but my experience with my SLI'd 970s at 5760x1080 was worse than everything I saw in reviews. In fact, I spent a good amount of time ensuring everything on my system was in proper working order, and nothing was wrong. They simply did not perform at the levels I was expecting. Also, to anyone saying "what do you expect for the cost": the same could be said of the AMD 290X vs. 290. At launch the 290 was significantly cheaper than the 290X and performed very well. People like myself were expecting the same kind of matchup, just with Nvidia instead of AMD. I still say they probably get that at 1080p, but at higher resolutions the 970's "flaws" will be felt.
 
It's all just part of the Nvidia master plan. Look at the new AMD cards with their new memory setup; only time will tell how well that's going to work out for them. But still, as you pointed out, this chip cutting is how the lesser card has always been made. So did this issue occur with them too? Did the 700-series cards have Maxwell chips? And if so, did the cut-back chips on them do this?
 
@ getdamafiaonyou

You know, I look at threads like "what's your Heaven score", and I look at a lot of these big cards and notice they get high FPS, say 120 FPS, but their low FPS is just as bad as my lesser card's, hovering around 20 FPS. That's a big swing from high to low. Then I look at my poor old 7850 at the same settings: I get maybe 55 FPS max, but my low is 18. So I look at that as a smoother run, because my drop-off is not as severe.

Going from 125 FPS to 23 FPS, compared to 55 to 18: the only thing that's maybe better overall is the average FPS, where mine is say 26 FPS and the better card is at maybe 36 FPS. And then I look at the price difference: my card was 150 bucks and the better card is 350+ bucks.

So who really got their money's worth?
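The comparison above can be put into numbers, using the figures quoted in the post (swing from max to min FPS, and dollars per average frame):

```python
# Compare the two cards from the post: FPS swing (max/min ratio) and
# dollars per average frame. The numbers are the ones quoted above.

def swing_ratio(max_fps, min_fps):
    """How far the frame rate falls from its peak; lower is smoother."""
    return max_fps / min_fps

def dollars_per_avg_fps(price, avg_fps):
    """Rough value metric: price paid per average frame delivered."""
    return price / avg_fps

big_card = {"max": 125, "min": 23, "avg": 36, "price": 350}
hd7850   = {"max": 55,  "min": 18, "avg": 26, "price": 150}

# Big card: 125/23 ≈ 5.4x swing, about $9.72 per average frame.
# HD 7850:  55/18  ≈ 3.1x swing, about $5.77 per average frame.
```

By these two toy metrics the cheaper card does swing less and cost less per frame, which is exactly the poster's point, though neither metric captures absolute playability.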
 


Synthetic benchmarks don't really mean anything, though. My trifire setup doesn't get that great scores (in my opinion) in Fire Strike, but the game performance is outstanding. I think it's all relative, too. Some people think Ultra at 20 FPS looks good; me, I like it to be smooth Ultra. I'm not saying the 970 is a bad card, and frankly I think all this controversy is about something that's really only going to be felt by maybe 2% of owners. However, I was one of them, and I think anyone considering a 970 should decide if they want to go SLI or higher than 1080p; if the answer is yes, I would advise them to look elsewhere. But for 1080p I think it's still a solid choice despite everything, for its low temps, lower power draw, and solid 1080p performance.
 
Second-generation Maxwell, from Wikipedia:
Nvidia revealed that it is able to disable individual units each containing 256KB of L2 cache and 8 ROPs without disabling whole memory controllers.[19] This comes at the costs of dividing the memory bus into high speed and low speed segments that cannot be accessed at the same time for reads because the L2/ROP unit managing both of the GDDR5 controllers shares the read return channel and the write data bus between the GDDR5 controllers, making either simultaneously reading from both GDDR5 controllers or simultaneously writing to both GDDR5 controllers impossible.[19] This is used in the GeForce GTX 970, which therefore can be described as having 3.5 GB in its high speed segment on a 224-bit bus and 512 MB in a low speed segment on a 32-bit bus.[19] The peak speed of such a GPU can still be attained, but the peak speed figure is only reachable if one segment is executing a read operation while the other segment is executing a write

http://en.wikipedia.org/wiki/Maxwell_(microarchitecture)
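For context, the quoted segment split comes down to simple bus-width arithmetic. A quick sketch, assuming the 970's stock 7 Gbps effective GDDR5 data rate:

```python
# Back-of-the-envelope peak bandwidth for the GTX 970's split memory bus.
# Assumes the card's stock 7 Gbps effective GDDR5 data rate.

def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps=7.0):
    """Peak bandwidth in GB/s for a GDDR5 bus of the given width."""
    return bus_width_bits / 8 * data_rate_gbps

fast = peak_bandwidth_gbs(224)   # 3.5 GB segment -> 196 GB/s
slow = peak_bandwidth_gbs(32)    # 0.5 GB segment -> 28 GB/s
full = peak_bandwidth_gbs(256)   # advertised 256-bit figure -> 224 GB/s
```

Since the two segments cannot be read at the same time, 196 GB/s rather than the advertised 224 GB/s is the realistic read ceiling for most workloads; the combined peak is only reachable when one segment reads while the other writes, as the quote says.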

I wonder how long this has been written up there. It seems they knew about it, but there's no date to tell whether it's been known for a while or was just edited today because of the 3.5GB issue.

OK, I see it was added from AnandTech recently, so it's a new paragraph on top of the original article.
 


It's not the SMMs but the disabled ROP cluster. All eight 32-bit GDDR5 memory buses are functional, but the physical control logic from the crossbar to the memory controller goes through the ROP unit; that's how you get L2 caching. Think of it like this: the crossbar serves a memory request to the memory controller, that request is first passed through the cache to see if the data is present or not, and the ROP unit is what controls what's in that cache. If there is no ROP unit / L2 cache, then the crossbar can't directly talk to the MC. So what happens is another ROP / L2 cache unit needs to do the controlling for it; essentially the eighth MC is slaved off the seventh ROP / L2.

That is why they did it: because otherwise each memory request would take twice as long, as you'd have to wait for the seventh ROP/L2 unit to first address its own MC, then address the slaved MC. That is very bad for overall performance. So instead they bonded the first seven MCs together into a 224-bit memory bus and made the eighth into a glorified L3 victim cache. The drivers keep new data on the 224-bit memory bus while moving old evicted data to the 512MB bus before finally chucking it out of memory entirely. It's a performance enhancement, because otherwise they would have had to cut off the eighth bus entirely and you'd just get 3.5GB of 224-bit GDDR5 memory and no large (fast compared to system memory) 512MB victim cache.
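That eviction behavior can be sketched as a toy two-tier allocator: hot allocations live in the fast 3.5GB segment, and the least-recently-used ones get demoted to the slow 512MB segment when it fills. This is purely an illustrative model (the class and names are my own invention), not Nvidia's actual driver logic:

```python
from collections import OrderedDict

FAST_MB = 3584   # 3.5 GB segment on the 224-bit bus
SLOW_MB = 512    # 0.5 GB segment on the 32-bit bus (overflow not modeled)

class SplitSegmentVram:
    """Toy model: hot allocations live in the fast segment; when it fills,
    the least-recently-used allocation is demoted to the slow segment."""

    def __init__(self):
        self.fast = OrderedDict()   # name -> size_mb, kept in LRU order
        self.slow = {}

    def _fast_used(self):
        return sum(self.fast.values())

    def alloc(self, name, size_mb):
        # Demote LRU entries until the new allocation fits in the fast segment.
        while self._fast_used() + size_mb > FAST_MB and self.fast:
            victim, vsize = self.fast.popitem(last=False)
            self.slow[victim] = vsize
        self.fast[name] = size_mb

    def touch(self, name):
        # Re-use marks an allocation as hot (moves it to the MRU end).
        if name in self.fast:
            self.fast.move_to_end(name)
```

Allocating more than 3584 MB pushes the coldest data onto the slow segment first, mirroring the "victim cache" role described above; a real driver would also handle overflow out of the slow segment and many other details.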
 
It's all another example of why I never buy the first iteration of a new series. Always wait until the end of the generation, after all the bugs and surprises have been worked out. It worked for Fermi with the GTX 580 (over the 480), Kepler with the GTX 780 Ti (over the 680), and now with Maxwell. I've been looking forward to the GM200 chip and the next GTX high-end video card. With a GTX 780 Ti, there has been no reason for me to jump on the early versions of Maxwell.
http://videocardz.com/54358/nvidia-maxwell-gm200-pictured

I tell you what, all AMD needs to do now is drop a big 380X cluster bomb in the middle of this chaos. I don't think they're ready, which is too bad for them. By the time the 380X does drop, this will all be forgotten, given the typically short attention span these things command.
 
I don't think it's the 970's specs or performance itself, but Nvidia's negligence/incompetence/possible deceit that people are worked up about, especially those who bought the card based on what it says on the box.

On the card itself, I agree. I personally think games that will use precisely between 3.5GB and 4GB are going to be few and far between (it would be a different story if this were a 3GB vs. 4GB argument), so I doubt any significant number of games will be unplayable on a 970 but playable on a 980. It's those who based their purchase decision on the 4GB that got hit the hardest.

Basically, a lot are mad at the company, not the card.
 


Why, though? It doesn't really matter. The 970 is still the best card for the price.
 
But I thought the reason people are upset is because it doesn't match the specs. It's 3.5GB, not a full 4. It will reach 4 in a half-baked way, and with a hit on performance compared to a true 4GB. Also, lots of users seem hellbent on revenge now, because Nvidia basically told everyone who has a problem with it to stick it.

"Oh, we made a boo boo but we really don't care. Refunds? Pfft, nope".

That is basically their message, and the insane news that Nvidia GPUs are not Nvidia products also ruined my trust in them. It's like buying a Panasonic TV, having an issue, going to Panasonic support, and being told, "Whoops, that's not our product; we only make one tiny little component in the TV, so we won't be helping you at all. Have a nice day."

Those two reasons alone are more than enough justification to give Nvidia the big middle finger right back and force refunds on them. Your product, your logo, your branding, the box has your name on it, the card has your name on it, but you say it's not yours. Too much deception happening, IMO.
 


But that's not true. It does have 4GB; it's just that the final 0.5GB has lower bandwidth due to the way they disable the units (changing a fully enabled 980 into a 970).

The remaining chunk can still be used, and they have made it lower priority than the faster 3.5GB.

The specs are pretty much irrelevant if it performs how it should, IMO.
 
It is still deception, though. I totally understand and agree with you as well. I think the 970 is overkill for my needs and represents insane value in price-to-performance. You could strip that 0.5GB off the card completely and it would still be a great deal. But the way Nvidia has handled this is proof that future problems will never be solved. If a company is willing to pee on all of its customers even after they admit to a mistake (regardless of how significant), then I think that company needs to be put in its place. It is very clear right now that they couldn't care less. So why should any 970 owners care about them in return?
 


How are they peeing on their customers? By selling a card that performs very well for a very reasonable price? Because they got the spec sheet slightly wrong?

Everyone is just overreacting, IMO; it's really not the issue it's been made out to be.
 


Also, in relation to this, it's ridiculous going to Nvidia for a return or refund! They don't make the graphics cards; they just sell the GPUs and the design to third parties like Asus/EVGA/MSI etc. It's them you should contact for a return/refund.
 
You don't have to keep saying it's overblown. I'm AGREEING WITH YOU. Repeat: I'M AGREEING WITH YOU. However, I am also not cool with paying for something that isn't what it said it was, no matter how insignificant. The GPU market is based on power and FPS; cards come with overclocking, and every frame seems to matter to customers of higher-end GPUs. If there is a hit on graphics performance, which there is, then you should be entitled to some sort of refund if you don't want the card anymore.

I'm taking the middle ground; both sides are right. Nvidia is right to say it's not even worth talking about. But if you paid $350 for something and didn't get a product that is 100% what it said it was, then you should be entitled to a refund.

This type of thing is only going to happen again. Nvidia will do it again with the next generation, because they know they can now. I say stop them before it happens again. Nvidia denied refunds, and you only have like 3 mods on their forums who said they'll try to help. Nvidia should say they will do anything to help. Instead, they said TOOO BADDDD SOOOO SADDDD to anyone who wants a refund. If you don't know how or why they are peeing on the customers, go chat with a support rep and see for yourself what they've been instructed to tell you regarding the issue.

That is wrong in my view.
 


They are saying too bad, BECAUSE THEY DON'T MAKE THE CARDS...
 
@ 17 sec.
''I tell you what, all AMD needs to do now is drop a big 380X cluster bomb in the middle of this chaos. I don't think they're ready, which is too bad for them. By the time the 380X does drop, this will all be forgotten, given the typically short attention span these things command. ''


Yeah, even if AMD delays the release to ensure that it delivers when it does. AMD may look at all this and figure they don't want, or can't afford, a fiasco like this when theirs hits the market. The 300-series cards look interesting, and I do want to see that new memory setup on them in action [HBM 3D stacked memory].

The bad thing about AMD is that their staff stability seems poor. Guys leave, guys come in, like no one wants to be there very long term. It's hard to get a program set in stone like that, or a guy starts something and somewhere along the line he leaves, and now his stuff is kind of up in the air because of that.

I'm no fanboy; I just go with whoever has the better product when it comes time for me to buy. All I ask is that when I use it, the item works as expected and as advertised; then I'm good with it.



 