Some time ago I predicted that discrete graphics cards would be killed by 2020, more or less, and I shared an Nvidia research quote confirming that they expect the same. Mark Papermaster's latest talk for AMD was about achieving a 25x increase in APU efficiency by 2020 (i.e., ~20 TFLOPS APUs). Discrete graphics cards were not mentioned during his talk.
AMD, like everyone else, is transforming into a SoC company. During the last re-organization the CPU division was subsumed into the APU division.
Now Linus Torvalds also agrees that dGPUs will be killed.
My arguments were centered on the physical inability of the ancient CPU+dGPU combination to scale up to match a high-end APU, due to the nonlinear scaling of silicon expected for 2018--2020.
A [ 75W CPU + 225W dGPU ] pair on 10nm will be about 8--10x slower than a 300W APU on the same node, due to the energy-wall problem, as every engineer (AMD, Intel, Nvidia...) knows.
I also mentioned economic reasons why discrete GPUs are not viable. Linus's arguments focus on this aspect and are worth quoting.
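A back-of-the-envelope sketch of the energy-wall argument. Everything here is my own illustrative assumption (the per-byte energy figures are order-of-magnitude numbers in the spirit of published dark-silicon estimates, and `compute_power_left` is a hypothetical helper), so this only shows the direction of the penalty, not the 8--10x figure itself:

```python
# Toy model of the "energy wall": at a fixed board power budget, every
# watt spent moving data off-package is a watt not spent on FLOPs.
# All constants are illustrative assumptions, not measurements.

PJ = 1e-12  # joules per picojoule

# Assumed energy per byte moved (order-of-magnitude only):
ON_DIE_PJ_PER_BYTE   = 1.0    # CPU<->GPU traffic inside an APU
OFF_CHIP_PJ_PER_BYTE = 100.0  # CPU<->dGPU traffic over PCIe + GDDR hop

def compute_power_left(budget_watts, traffic_bytes_per_s, pj_per_byte):
    """Watts left for computation after paying for data movement."""
    movement_watts = traffic_bytes_per_s * pj_per_byte * PJ
    return max(budget_watts - movement_watts, 0.0)

TRAFFIC = 500e9  # assume 500 GB/s of CPU<->GPU traffic

apu  = compute_power_left(300, TRAFFIC, ON_DIE_PJ_PER_BYTE)
dgpu = compute_power_left(300, TRAFFIC, OFF_CHIP_PJ_PER_BYTE)

print(f"APU watts left for FLOPs:      {apu:.1f}")
print(f"CPU+dGPU watts left for FLOPs: {dgpu:.1f}")
```

Under these assumed numbers the off-package path loses ~50W of its 300W budget to data movement alone; the real-world gap also involves bandwidth stalls and duplicated memory pools, which this toy model ignores.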
Do you still believe that discrete GPU's have a future?
What do you base that ludicrous belief on? Drugs?
Because everything says that IGP's are getting to be "good enough" for a big enough swath of the market (and that very much includes most gamers - look at the game consoles, for chrissake! You are aware that modern game consoles are IGP's, right?) that the discrete GPU model isn't financially viable in the long run.
So your argument is exactly the wrong way around. It's not that the IGP's can't have a adequate market size, it's the discrete GPU's that have market size problems.
And the IGP's are very much moving in the direction of the GPU being more of an general accelerator (AMD calls the combination "APU"s, obviously). And one of the big advantages of integration (apart from just the traditional advantages of fewer chips etc) is that it makes it much easier to share cache hierarchies and be much more tightly coupled at a software level too. Sharing the virtual address space between GPU and CPU threads means less need for copying, and cache coherency makes a lot of things easier and more likely to work well.
We've seen this before, outside of graphics. Sure, you can use MPI on a cluster, and get great performance for some very specific loads. But ask yourself why everybody ends up wanting SMP in the end anyway. The cluster people were simply wrong when they tried to convince people how hardware cache coherency is too expensive. It's just too complicated to come up with efficient programming in a cluster environment.
The exact same is true in GPU's too. People have spent tons of effort into working around the cluster problems, and lots of the graphical libraries and interfaces (think OpenGL) are basically the equivalent of MPI. But look at the direction the industry is actually going: thanks to integration it actually starts making sense to look at tighter couplings not just on a hardware level, but on a software level. Which is why you see all the vendors starting to bring out their "close to metal" models - when you can do memory allocations that "just work" for both the CPU and the GPU, and can pass pointers around, the whole model changes.
And it changes for the better. It's more efficient.
Discrete GPU's are a historical artifact. They're going away. They are inferior technology, and there isn't a big enough market to support them.
Linus
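Linus's point about passing pointers instead of copying can be illustrated by analogy (this is ordinary Python, not real GPU code): a thread shares the caller's address space, playing the role of an APU's unified memory, while an explicit serialize-and-copy round trip plays the role of the CPU-to-dGPU transfer:

```python
# Analogy for the shared-address-space argument (illustrative only):
# a thread reads the caller's data in place; the "discrete" path must
# serialize and copy the whole buffer across an explicit channel.
import pickle
import threading

data = list(range(100_000))

# Unified-memory style: hand the worker a reference, zero copies.
result = {}
t = threading.Thread(target=lambda: result.update(s=sum(data)))
t.start()
t.join()

# Discrete style: every transfer is a serialize + copy round trip,
# much like staging a buffer across PCIe before the GPU can touch it.
wire = pickle.dumps(data)          # bytes actually copied "over the bus"
remote_copy = pickle.loads(wire)   # a second, distinct copy now exists

assert remote_copy is not data
assert sum(remote_copy) == result["s"]
print(f"bytes copied across the 'bus': {len(wire)}")
```

The shared-memory path never duplicates the buffer, which is exactly the property Linus attributes to allocations that "just work" for both CPU and GPU.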
http://www.realworldtech.com/forum/?threadid=141700&curpostid=141714
I like the last line. Game developers also agree that discrete GPUs are a historical artifact and an inferior technology.