Don, you forgot not only to mention but also to correct in the table that the card's memory bandwidth is not 224 GB/s, but rather 196 GB/s for the 3.5 GB segment and a maximum of 28 GB/s for the second segment. And it's likely less than 28 GB/s for the second segment, since Nvidia has said the smaller resources available to access it make it slower. Oh, and there's the fact that you can't use both segments at the same time.
So, to use a car analogy of my own: it's like saying your car can do 196 mph in fifth gear and 28 mph in first gear, and that therefore your car's top speed is 224 mph. Everybody knows the math doesn't work that way, yet somehow some sites are giving Nvidia a free pass on this point.
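To make the arithmetic concrete, here's a minimal sketch of my own (an illustrative model, not anything from Nvidia's documentation; the numbers are just the published peaks) showing that when two memory segments can only be serviced one at a time, the effective bandwidth is a time-weighted average that can never exceed the faster segment's 196 GB/s, let alone reach 224 GB/s:

```python
# Toy model (my own assumption): the controller services one segment
# at a time, so the transfer times add instead of overlapping.
FAST_BW = 196.0  # GB/s, peak for the 3.5 GB segment
SLOW_BW = 28.0   # GB/s, best case for the 512 MB segment

def effective_bandwidth(gb_fast, gb_slow):
    """Effective GB/s when gb_fast GB and gb_slow GB are moved
    back-to-back, never simultaneously."""
    total_gb = gb_fast + gb_slow
    total_seconds = gb_fast / FAST_BW + gb_slow / SLOW_BW
    return total_gb / total_seconds

print(effective_bandwidth(3.5, 0.5))  # ~112 GB/s -- filling all 4 GB
print(effective_bandwidth(3.5, 0.0))  # ~196 GB/s -- the true ceiling
```

Touching the slow segment at all drags the average down; nothing ever pushes it above 196 GB/s.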
And yes, it does matter, because it changes the value proposition when making a buying decision. People thought they were getting an awesome deal because the card only had fewer CUDA cores, with everything else left intact to help it out in the future. It performs well now, but what about in a year, with new games that use more VRAM coming out every month?
Also, don't forget the importance of PR: what were the real effects of announcing a card with the same number of ROPs as AMD's counterparts, which had been available for a year? The message: this card is well prepared for high-resolution rendering. And the full complement of L2 cache helped ease concerns about a 256-bit memory bus and apparently low memory bandwidth (compared to AMD's offerings). Remember that part of the PR push was emphasizing Delta Colour compression, which practically nobody had heard of until then even though it was already in its third iteration. This time they gave it a highlight, which suggests they wanted to send a message. Saying that the card has the full complement of L2 cache, the same as the GTX 980, would also fit that message.
Was it an honest communication mistake? I don't know. But I do know that it matters, and that it affects the card's value proposition over the long term (the long term of a card's useful lifetime, that is).
Edit:
Don, I see that you have now corrected the table (it's good practice to note article changes, which you hadn't done as of this edit) to read:
224 GB/s aggregate
196 GB/s (3.5 GB)
28 GB/s (512 MB)
But this is still incorrect. As Nvidia itself has admitted, both segments can't be used at the same time, so you therefore cannot add the two bandwidth numbers; it's one OR the other at any given moment. AnandTech (now your sister site) has an article saying exactly this. Listing "224 GB/s aggregate" is, at the very least, misleading.
I think that, at the end of this whole misleading affair, reporters should be the first to be accurate.
[Answer By Cleeve]
Fair enough, I've removed the 'aggregate' spec.
Keep in mind that it's common practice to describe dual-GPU cards as having a 512-bit aggregate bus when each GPU really has its own 256-bit bus, so I considered this in the same vein. But honestly, I've never liked that practice myself, so I'm quite OK with dumping it.
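For what it's worth, here's a quick sketch (a toy model of my own, not an official spec) of why summing is defensible for dual-GPU cards but not here: two independent buses transfer in parallel, so their bandwidths genuinely add, whereas the GTX 970's segments alternate, so their transfer times add instead:

```python
# Dual-GPU: two independent buses run in PARALLEL; total time is set
# by the slower transfer, so bandwidths effectively add.
def parallel_bw(bw_a, bw_b, gb_a, gb_b):
    return (gb_a + gb_b) / max(gb_a / bw_a, gb_b / bw_b)

# GTX 970 segments: accesses ALTERNATE; times add, bandwidths don't.
def exclusive_bw(bw_a, bw_b, gb_a, gb_b):
    return (gb_a + gb_b) / (gb_a / bw_a + gb_b / bw_b)

print(parallel_bw(224, 224, 4, 4))      # ~448 GB/s -- summing holds
print(exclusive_bw(196, 28, 3.5, 0.5))  # ~112 GB/s -- summing doesn't
```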
As for your (and others') concern with the car analogy, I stand behind it. Real-world measured performance is the metric that will always matter most to me. In the case of a car, that's the quarter mile; in the case of a graphics card, it's frames per second. The benchmarks we measured in frames per second have not been changed by this revelation, or rendered any less accurate, so I'm going to have to agree to disagree with you on the merit of this metaphor.
As far as the merits of PR go, I think you might be overestimating them versus raw performance. Nvidia has released other asymmetrical cards in the past, and I never got the impression that the public boycotted them because of it. If they *were* avoided, it was because they were slower than the Radeon competition, plain and simple. It should come down to frames per second.
But once again, everyone is free to disagree and hold their own opinion. I'm simply calling it as I believe it to be. I would probably feel differently if the company had a history of lying about technical specifications, but I can't recall anything similar in the last 20 years or so. I *can* recall them owning up to other strange memory configurations with similar limitations, so it doesn't seem logical to assume they decided to blatantly lie this time around when they previously came clean.
But who knows? Regardless of what any of us believe, by design or by accident, Nvidia has tremendous mindshare to earn back if it wants the public's trust. This kind of mistake should be taken very seriously. If it ever happens again, I don't think anyone would believe it was an accident.
But to my mind, that doesn't affect the GTX 970's proven performance, nor does it make the card any less desirable for the money. If you feel it does, more power to you. Your opinion is as valid as my own, as long as you have valid reasons to justify it.