I am not really surprised that the same people who were unable to understand the claims about dGPUs the first time, and unable to understand the months of further explanations, are not going to understand anything now. At this point I think pretty much everyone agrees on that.
Nor am I surprised that the same people who made the silly claim that ARM cannot scale up are now attacking Nvidia Denver. Those guys have no idea what Nvidia can or cannot do, but we do know that OoO execution has three big problems: it doesn't scale up well, it consumes a lot of power, and it takes up a lot of die area.
What those guys don't know is that plenty of interesting alternatives to OoO are under research and development. The most modern techniques provide about the same performance as OoO while consuming much less power and occupying less area.¹ Nvidia's engineers have proposed a very interesting OoO alternative: DCO (Dynamic Code Optimization). In a sense, DCO reminds me of recent decoupled/DLIW techniques. One of the more interesting aspects of the Denver architecture is its potential to reach kilo-instruction-level processing in the future, something that existing commercial OoO processors cannot achieve (Haswell, for instance, can only track fewer than two hundred in-flight instructions, since its reorder buffer holds 192 entries, precisely because of the scaling issues mentioned above).
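To make the DCO idea concrete, here is a toy sketch of the general technique (profile code as it runs, then replace hot regions with an optimized cached version). This is purely my own illustration of the dynamic-optimization concept, not Nvidia's actual design: the block names, the threshold, and the "fused" fast path are all invented for the example.

```python
# Toy dynamic code optimizer: interpret basic blocks, profile them,
# and swap in a pre-fused cached function once a block becomes "hot".
# Hypothetical illustration only; not Denver's real mechanism.

HOT_THRESHOLD = 3  # assumed profiling threshold, purely illustrative

def interpret(block, env):
    """Slow path: dispatch each (op, dst, a, b) instruction one by one."""
    for op, dst, a, b in block:
        if op == "add":
            env[dst] = env[a] + env[b]
        elif op == "mul":
            env[dst] = env[a] * env[b]
    return env

def fuse(block):
    """'Optimize' a hot block into a single closure, standing in for the
    translated/optimized microcode a real optimizer would emit."""
    def fast(env):
        for op, dst, a, b in block:
            env[dst] = env[a] + env[b] if op == "add" else env[a] * env[b]
        return env
    return fast

counts, cache = {}, {}

def run_block(name, block, env):
    counts[name] = counts.get(name, 0) + 1
    if name in cache:                  # fast path: already optimized
        return cache[name](env)
    if counts[name] >= HOT_THRESHOLD:  # block just became hot: optimize it
        cache[name] = fuse(block)
        return cache[name](env)
    return interpret(block, env)       # cold path: plain interpretation

# Example loop body: x = x + y; z = x * x, executed 5 times.
block = [("add", "x", "x", "y"), ("mul", "z", "x", "x")]
env = {"x": 1, "y": 2, "z": 0}
for _ in range(5):
    run_block("loop_body", block, env)
print(env["x"], env["z"], "loop_body" in cache)  # → 11 121 True
```

The point of the sketch is the control flow, not the arithmetic: after the third execution the block is served from the optimizer's cache, which is roughly the shape of any profile-then-retranslate scheme.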
Nobody should be surprised that some of the most respected analysts in the microprocessor industry claim that the Nvidia Denver CPU is a winner. Well, nobody who isn't a rabid fanboy with an evident hidden agenda.
¹ E.g. Flea Flicker Multipass, which uses an in-order pipeline, delivers nearly 90% of the performance of an idealized 6-wide OoO core while consuming only a small fraction of the power, and its logic requires between 2x and 10x fewer transistors. But this is not my favorite modern technique. ;-)