InvalidError :
somebodyspecial :
4K is the job of the next gen after a die shrink, not 28nm, and even then they'll probably only make me happy SOME of the time.
I do not think making you happy figures anywhere on AMD's agenda.
GPU improvement has been mostly stagnant for the past 3-4 years from being stuck at 28nm, and the way I look at Fury, it's like an open beta for HBM: HBM will become necessary to get the most out of 16nm when it becomes available. Having it out in the wild a year before 16nm hits gives AMD that much more time to refine their future HBM(2) and architecture plans based on field results. Fury might be a questionable success, but that's alright since it was apparently never meant to shatter frame rate records in the first place, with its 64 ROPs limiting it to the same pixel fill rate as the R9-390X.
In principle, the 980Ti's 96GP/s pixel fill rate (vs 67GP/s for Fury) makes it much better suited to driving 4K, albeit at possibly lower details.
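For a rough sanity check on those fill rate figures, pixel fill rate is just ROPs times core clock. A minimal sketch, assuming reference clocks of roughly 1.0GHz for the 980Ti and 1.05GHz for Fury X and the 390X (boost clocks will differ):

```python
# Rough pixel fill rate estimate: ROPs * core clock (GHz) = gigapixels/s.
# Clocks below are approximate reference values, not boost clocks.
def fill_rate_gps(rops, clock_ghz):
    return rops * clock_ghz

cards = {
    "GTX 980 Ti": (96, 1.00),
    "R9 Fury X":  (64, 1.05),
    "R9 390X":    (64, 1.05),
}

for name, (rops, clk) in cards.items():
    print(f"{name}: {fill_rate_gps(rops, clk):.1f} GP/s")
# ~96 GP/s for the 980 Ti, ~67 GP/s for Fury X and the 390X.
```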
You should always be worried about making your customers happy as a business (as many as possible, period - and I'm in the 95% or so running 1440p or less). A test project that makes no money was not what they needed this year. They needed tried-and-true GDDR5, which has plenty of bandwidth today and is cheap, and the savings should have gone into more ROPs and more of the die dedicated to the GAMING side. WINNING should have been the goal, period, when your market share slips yearly. If winning isn't your goal when you're on the edge of bankruptcy, fire your management...LOL. I'm not even sure we'd need anything more than faster GDDR5 at 16nm. NV clearly still gets a ton from GPU overclocking (20% more from the 980Ti when overclocked), and they could go to a 512-bit bus next, while they currently sit at 384-bit. Heck, they went back to 256-bit with the 980, thanks to better compression algorithms etc.
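To put some numbers on that bandwidth argument, here's a quick sketch; the 512-bit/8Gbps config is hypothetical, the others are the shipping specs as I understand them:

```python
# Memory bandwidth = (bus width in bits / 8) * effective data rate (Gbps per pin).
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

configs = {
    "GTX 980 (256-bit GDDR5 @ 7 Gbps)":       (256, 7.0),
    "GTX 980 Ti (384-bit GDDR5 @ 7 Gbps)":    (384, 7.0),
    "hypothetical 512-bit GDDR5 @ 8 Gbps":    (512, 8.0),
    "R9 Fury X (4096-bit HBM1 @ 1 Gbps)":     (4096, 1.0),
}

for name, (bus, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(bus, rate):.0f} GB/s")
# 224, 336, 512 and 512 GB/s respectively - a wider, faster GDDR5 setup
# lands right where Fury's HBM1 does.
```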
Stagnant for 3-4 years? LOL
http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_980_ti_g1_gaming_soc_review,15.html
Even the 780Ti is half the speed of the 980Ti, and the GTX 680 came out in March 2012, just over 3 years ago, and is handily beaten by the 780Ti, right?
http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review
http://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/6
A better example, since anandtech has both the 680 and the 980Ti shown here: the 680 scores 22.5fps at 1440p, while the 980Ti scores 85.4fps. Almost 4x faster, and you call it stagnant. Never mind that if I went back a full 4 years (it wouldn't be the 680 then), the 580 is in that list too at 15fps... I guess we have MAJOR differences on what counts as progress. That is just one example, but you should get the point. Min fps tells the same story: 12fps for the 680 and an even bigger spread with the 980Ti at 56fps (over 4x faster!). Can't show 4K because the 680 can't do it (more memory, itself part of the progress, counts as part of the end product too), otherwise it might look even worse, and I'm not saying the 980Ti is 4K-able...LOL (it doesn't count if I'm turning off stuff the devs wanted me to see). Even in a game like Civ5 it's 2x faster than the 680, and nearly bottlenecked by the CPU. I think the worst case is 2x faster in any of the games. Also note that when 28nm started they barely got 4.3 billion transistors into a GPU (Radeon 7970, Dec 2011, $550, and the same 384-bit bus...LOL); today they have 8.9 billion. A small part of that is the 7970's 5.5GHz memory vs. today's 7GHz, but we're still talking mostly GPU improvements that got them 2-4x faster in today's games. There have been a lot of major modifications that add up to a multiple like that, so no, you seem confused.
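If you want the math spelled out, a quick sketch using the 1440p fps values quoted from the anandtech pages linked above:

```python
# Speedup ratios from the average fps figures cited above (1440p).
avg_fps = {"GTX 580": 15.0, "GTX 680": 22.5, "GTX 980 Ti": 85.4}
base = avg_fps["GTX 680"]
for card, fps in avg_fps.items():
    print(f"{card}: {fps} fps -> {fps / base:.1f}x the GTX 680")
# The 980 Ti works out to ~3.8x the 680; on minimums (56 vs 12 fps) it's ~4.7x.
```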
A year early on HBM1 means nothing when you're going to a new die shrink, FinFET+, and HBM2 all at once. That experience wasn't needed (it's probably useless once you account for all 3 changes; it might matter SLIGHTLY if the next part used HBM1 again, but it doesn't), it just added cost, and it appears to have limited production to where they can't even get a sample to Toms for more than a day; Maximum PC said they had a single card for 4 of their sister sites and had no time to even share it (LOL), etc. How much is any experience worth if it costs you even more market share for another year and produces LOSSES on top of that? I don't need experience that badly at the expense of everything else...LOL. If I'm AMD, I need sales, profitable parts in those sales, and R&D on the GPU CORE itself to WIN vs. the other guy without WATER, who has clearly shown there is ample bandwidth in GDDR5 even on a 384-bit bus for cards this gen. If HBM were a fix for a problem that REALLY existed (as in it suddenly vaulted AMD's Fury to 50% faster than NV because bandwidth really was an issue), I would be all for it. But that isn't the case, so again management is stupid and NV wins again by making the smarter moves: GDDR5, previously NO CONSOLE (they went mobile instead for the last 5 years, and are now getting somewhere with it), better drivers (day 1 for all major games, WHQL drivers monthly), Gsync (freesync sorta works), etc. Pascal will have 3x the memory bandwidth of Maxwell, but it won't be 3x faster in any game IMHO. I could be wrong, but I would not bet on it.

I'll be shocked if it's 2x Maxwell 2 in any game, as I don't think NV will risk max-sized dies on the new process first. I might not say that if there were mass evidence we are memory bottlenecked NOW, but we clearly are not. I.e., when NV went to 28nm with the 680, they went up only 0.5B transistors on a 294mm^2 die. In the 680 article, one of the many changes mentioned is the fact that NV managed to get a CHIP & board made that was capable of 6GHz operation. Imagine what it took to get 7GHz going. From the 680 article above:
"Perhaps the icing on the cake for NVIDIA though is how many revisions it took them to get to 6GHz: one. NVIDIA was able to get 6GHz on the very first revision of GK104, which after Fermi’s lackluster performance is a remarkable turn of events."
It's not as simple as slapping faster memory on it and suddenly it works. It takes engineering to get it all working: faster, bigger, better prefetching, etc. Look at the massive chip re-org just from the 580 to the 680 in that article. Many things changed that are harder than they look, from GPCs to PolyMorph 2.0. They may seem like small changes individually, but in the grand scheme of things, all of them together from the 680 to the 980Ti get you 2-4x the perf on the same process node. The DX feature level on the 680 was 11_0 IIRC; look at the 980Ti. Support for DX12 brings some significant changes, correct?
http://www.legitreviews.com/geforce-gtx-980-ti-dx12-feature-level-and-tier-details_164782
The differences between DX11.0 support and feature level 12_1 are pretty substantial. That SIGGRAPH 2015 vid is pretty impressive.
A bad move is a bad move. I don't think HBM2 (let alone HBM1) will be necessary until the end of 16nm, when they're cramming FAR more than 8.9B transistors into GPUs. People can run GDDR5 up to 8GHz, so you could probably easily ship faster memory stock, and in NV's case also go from a 384-bit to a 512-bit bus while doing it. Unless people start doing something stupid like adding 8K benchmarks, a 512-bit bus and 7.5-8GHz memory (next-gen GDDR5 probably goes higher than 8GHz OCed as it shrinks and gets further optimized) would be more than enough for 4K, at least at the beginning of 16nm. Looking at what they've done in 3.5 years, cramming in 4B transistors then and 8.9B now (7.9B in NV's 980Ti case), I see your point for 3-4 years into the process; yeah, I agree, but there's no sign you're anywhere near correct for the first rev, with NV having so much room to play with and not even tapping out the 384-bit bus, as they get exactly the 20% they OC the GPU by in most games. Also note the 980Ti is almost 2x faster than the 690 in most stuff and does it with far fewer watts. I'd call that progress too. You can say it was because of process advancements, fewer memory chips, some design work, etc., but it all adds up to FAR better perf than 3-4 years ago. The fact that NV was able to go back to a 256-bit bus with the 980 vs. the 780Ti and BEAT it should say something about the GPU, and they did it with ~80 fewer watts (and 2B fewer transistors at $150 less). No matter how I look at it, they've made some pretty nice improvements in the last 3-4 years, EASY. Even the stupid compute benchmarks at anandtech show this for the 980Ti vs. the 680 (2-4x faster).
The worst was Sony Vegas in compute, but that's a terrible example, as Adobe uses CUDA and would fare far better IMHO. There is a good reason anandtech uses Vegas instead of Adobe, which would make it so easy to compare NV/CUDA vs. AMD/OpenCL (or OpenGL, whichever is faster for them): AMD cards like Vegas, despite AMD claiming a few years ago that massive Premiere perf was coming... OK, when? They showed slides claiming 45% IIRC, but we've heard no peep since, so anandtech avoids it, and Toms too for that matter (and they don't respond to questions about why in the forums either...hmmpf). Not saying Toms is biased here, just saying they're avoiding showing you how bad things really are (hey, AMD needs all the help they can get). Not divulging easily obtained info isn't as bad as saying it isn't there.
😉 I think this is the same reason Toms never tests a CUDA plugin in any of their pro apps vs. the OpenCL versions on AMD (most popular pro apps have one OR MORE for each side, and Toms has tested many of these apps).
Even something as simple as putting an IMC on AMD's chips years ago, which allowed a victory (until Intel got one too), is CPU progress to me. Along with other things, it all added up to FASTER chips. Just having two competitors hashing it out year after year helps, but it's still BETTER CHIPS pretty much year after year. Maxwell 2 is nothing short of amazing compared to the 680's GPU (3.5B vs. the 980Ti's 7.9B transistors, same 28nm). It's not easy to make a ~600mm^2 die work. There are many factors in the design that allow it to actually make money while not catching fire...LOL. Don't forget you can put much better visuals on newer models too. You go ahead and take a 680 home if you want, but I'll take a 980Ti to go, thanks.

Faster is faster, no matter how it happened. Even 2x the transistors doesn't explain the 4x advances in some things. Never mind the fact that you won't be looking at the same graphics on a 680 vs. a 980Ti soon, with DX12.1 etc. You can say they're just re-arranging or adding more of X or whatever, but if it scores higher and can look better, it's advancement. In any case, we have vastly different ideas of what GPU progress is, I guess, and I'll leave it at that.