Matt1685 :
I said: Their HD graphics architecture is slow and energy inefficient, even in graphics-only modes.
You replied: "You're basing this on what, exactly?"
I'm basing it on hardware review sites such as AnandTech, PCPerspective, Guru3D, etc. They measure the performance and the power draw.
The reason I ask is that this is a bit tricky to measure. First, GPUs get more efficient with scale. If you scale down Nvidia or AMD GPUs, they'll also look much worse. More importantly, Intel's GPUs are limited to DDR3 and now DDR4, neither of which have the energy efficiency of GDDR5 or GDDR5x (as measured in GB/s per W).
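To put the GB/s-per-W point in concrete terms, here's a trivial sketch. The bandwidth figures just follow from the interface math (transfer rate times bus width); the power figures are placeholders I'm assuming for illustration, not measurements, so plug in numbers from the review sites if you want a rigorous comparison.

# Back-of-envelope GB/s-per-W comparison. Bandwidth follows from the
# interface math; the power numbers are illustrative guesses, not data --
# substitute measured figures from AnandTech etc. for a real comparison.

configs = {
    # name: (bandwidth in GB/s, assumed memory-interface power in W)
    "DDR4-2400, dual channel (2 x 64-bit x 2400 MT/s)": (38.4, 6.0),
    "GDDR5, 256-bit bus @ 7 Gbps": (224.0, 20.0),
}

for name, (bandwidth, power) in configs.items():
    print("%-50s %6.1f GB/s per W" % (name, bandwidth / power))

Even if the assumed power draws are off by a fair margin, the gap in bandwidth per watt between a DDR interface and a wide GDDR5 interface doesn't close.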
Matt1685 :
"Deep learning will soon be dominated by ASICs."
Maybe. That's yet to be determined. Certainly in the inference portion, that's probably true. Training-wise, it would take a long time. There's a whole lot of software out there optimized for GPUs as far as deep learning training goes.
Most software using deep learning is built atop one of the frameworks like Caffe, TensorFlow, etc. Using different hardware is easy, if the hardware vendor supports the framework you're using.
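As a rough illustration of what I mean, here's a minimal TensorFlow sketch using the old graph/session API; the device string is the only thing that changes when you move to different hardware, assuming the vendor ships a supported backend.

import tensorflow as tf

# The model definition is identical regardless of the hardware underneath;
# only the device string changes. If a vendor's hardware is supported by the
# framework, users pick it up by swapping this one line.
device = "/gpu:0"   # or "/cpu:0", or whatever device the framework exposes

with tf.device(device):
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c).shape)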
Matt1685 :
Not sure why this has become an anti-NVIDIA rant as far as you are concerned. Our discussion wasn't about NVIDIA. In fact, I think it's strange that you first say Intel is going to develop a GPU and beat NVIDIA and now you say that GPUs are doomed. You've lost the plot in your argumentative zeal.
Anyone who might happen to read this can decide for themselves who lost the plot. The reason we got here is that I claimed that if Intel were serious about tackling GPUs, they'd just scale up their HD Graphics architecture. You seem to have some sort of anti-Intel bias, because you're refusing to accept that it's competent given its size and memory limitations.
In GPU-compute tests, Intel's GPUs hold their own against AMD's APUs. As far as I'm concerned, that's evidence enough that the architecture has potential.
Matt1685 :
"And the CPU-integrated market is currently limited by memory bottlenecks that are soon to be alleviated."
The CPU market is limited by the death of Dennard scaling. Faster memory isn't going to reverse the need for parallelization to continue performance improvements.
You missed an important word. I said "CPU-integrated", which was meant to imply graphics. I think we'll see AMD offering APUs with HBM2 and substantially more powerful integrated graphics in the near future. Perhaps Intel will beat them to the punch.
Matt1685 :
You may have started with no dog in the fight, but you obviously took one along the way.
No, I still have no dog in the fight of Intel vs. Nvidia. All my recent CPUs are Intel and my latest GPU is Nvidia. I like AMD's commitment to open standards, but I'm a pragmatist and pick the best product for my needs.
Matt1685 :
somewhere along the way you got locked into arguing against it because you didn't want to consider that something you said about NVIDIA and SIMD might not be accurate.
If I have an issue, it's with being accused of making misleading statements by someone who's clearly no expert in SIMD or GPU programming, simply because those statements don't line up with Nvidia's marketing spin. If you see a discrepancy between what someone is saying and other information on the topic, it's fine to inquire about it. But please save the accusations for when you really know what the heck you're talking about.
If you say so.