blazorthon:
There isn't any point in time where I can see discrete graphics getting really replaced. No GPU that is integrated as of yet can compete with anything other than very low end entry level discrete cards and there's no point in the next decade where that's expected to change, if ever. It can be described as simply a difference of how much power can be used. Discrete can use multiple hundreds of watts of power whereas integrated, at best, only gets a few dozen watts of power. What can be done with multiple hundreds of watts of power can be many times higher performance than what can be done with a few dozen watts of power at any given time.
Comparing power draw is a pretty poor proxy for performance, and your argument in particular looks thin on the surface.
Let me take a stab at reworking your argument in the only appropriate compute analogy.. with cars.
My sister's 1970 GTO with the 455ci option produced a fairly impressive 360 HP and a much more impressive 500 ft-lb of torque. It was hugely impressive when I was a little kid and I could barely lift my arms off the seat when my brother punched it. It was also rated at 12mpg city with no emissions controls, though my sister got 5-8mpg [depending on whether my brother got hold of the keys].
Fast forward several decades.. the 2006 GTO. Though we think of old muscle cars as being huge, the 2006's curb weight is around 100lbs over the 1970's.
Smaller engine at 364ci. More HP at 400, less torque at 400 ft-lb.
2006 performance? 0-60 nearly 1.9 seconds faster. Quarter mile 2 seconds faster despite the weight and torque disadvantages.
Oh yeah, don't forget it's loaded up with emissions equipment and is rated 15 city.
That's a long way to go, but I'm feeling chatty and I think it's a particularly good analogy because it illustrates many aspects you glossed over. Before digging in, let me point out that there's been a lot less room to grow in internal combustion efficiency between 1970 and 2006 than in processor performance-per-watt between 1971 and 2006. CPU lithography in 1971 was done at [I'm not exaggerating] 10,000nm. Today we're making processors at 22nm, and performance is well over 20,000x greater. As curious as I am, I'm not going to look up power consumption on 1970s-era CPUs.. but I used to run code on an old IBM mainframe that needed chilled water with a roof heat-exchanger, and it was probably many times slower than my phone.
OK.. I'm still going. Hopefully someone enjoys the story enough to keep reading.
🙂
Why the analogy is, I think, a good one...
- as time goes on, performance [compute or speed] goes up as power [gas or electrical] goes down. As technology advances, efficiency goes up.
- Complex systems benefit from and are held back by all their components. How does a new car with slightly more HP and much less torque smoke an old car that weighs less and has a 100+ ft-lb torque advantage in 0-60? Suspension, tires, transmission, responsiveness [fuel injection vs. carb.. etc].
- and this one requires a bit of a stretch, but the analogy still works.. if we scaled old tech to new tech, would it matter? If GM built a 455ci engine whose performance scaled like the 364ci's [over the old block], would it really matter in the end?
The last point is where your argument fails.. perhaps never in absolute terms, but on the terms that matter.
Scale it up linearly and we get a 500HP motor [364ci * 1.25 just happens to equal 455ci, and 400HP * 1.25 = 500HP]. Putting 500HP in a 2006 GTO isn't going to get you 3.8-second 0-60 runs. The tires won't grip. You might even blow the trans.
Look at it another way.. the top speed of the 2006 GTO was something like 167-169mph. Scaling up to a 455ci w/ 500HP isn't going to also get you to 210mph no matter what else you upgrade. That one is probably more obvious.
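The scaling arithmetic above is easy to sanity-check. Here's a quick Python sketch; the v³ aero-drag model for top speed is my own rough assumption, not a dyno figure:

```python
# Linear scaling from the 2006 364ci/400HP engine to a hypothetical 455ci build.
scale = 455 / 364            # = 1.25 exactly
print(400 * scale)           # 500.0 HP on paper

# But top speed is drag-limited: required power grows roughly with v^3,
# so top speed only scales with the cube root of the power increase.
v_base = 169                 # mph, roughly the 2006 GTO's top speed
v_scaled = v_base * scale ** (1 / 3)
print(round(v_scaled))       # ~182 mph, nowhere near 210
```

So even a clean 25% power bump buys you barely 13mph up top.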
Right now, discrete GPUs, at least the really big ones, are REALLY BIG and for that reason use lots of power. They're just one part in an even more complex system, though. They have an advantage in bandwidth to local RAM, but remember that they're still swapping data into it from main memory. Their main choke point is the PCIe bus and the fact that it operates in a different memory address space from the main system RAM. Integrated GPUs have a massive advantage here. An AMD APU's working space is slower RAM, but it can directly access the same memory addresses as the CPU. There's no swapping. The benefit is immediately obvious when you look at the level of performance you get [especially in GPGPU calc] from an APU versus a CPU+GPU system: a fairly small integrated core keeps up with or beats a mid-sized discrete core.
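To put rough numbers on that choke point: the bandwidth figures below are my own 2012-era ballpark assumptions [PCIe 2.0 x16 at ~8GB/s usable, dual-channel DDR3-1333 at ~21GB/s], not benchmarks, but they show the copy tax an APU simply never pays:

```python
# Time just to copy a GPGPU working set across the PCIe bus.
pcie2_x16_gbs = 8.0       # GB/s usable, PCIe 2.0 x16 (assumed)
ddr3_dual_gbs = 21.3      # GB/s, dual-channel DDR3-1333 (assumed)
buf_gb = 0.5              # a 512MB working set

copy_ms = buf_gb / pcie2_x16_gbs * 1000
print(f"PCIe copy-in alone: ~{copy_ms:.1f} ms")   # 62.5 ms before any math runs
# The integrated GPU reads the same physical addresses as the CPU: copy cost ~0.
```

For short kernels, that copy can cost more than the compute itself.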
Further, we can assume that both discrete GPUs and APU graphics will continue to scale.. perhaps even at similar rates for the foreseeable future. Will the choke points scale, though?
When APU and GPU graphics performance doubles, will PCIe bandwidth have doubled by then? If not, APUs will start to scale faster than discrete GPUs.
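Here's a toy model of that divergence. The doubling periods are pure assumptions of mine [GPU throughput every ~2 years, bus bandwidth every ~4, roughly one PCIe generation], but any gap between the two rates produces the same trend:

```python
# Bus bandwidth available per unit of GPU compute, over time.
for t in range(0, 13, 4):
    gpu = 2 ** (t / 2)    # compute doubles every 2 years (assumed)
    bus = 2 ** (t / 4)    # bus bandwidth doubles every 4 years (assumed)
    print(f"year {t:2d}: {bus / gpu:.3f}")
# The bus-fed fraction halves every 4 years: the discrete card starves first.
```

The APU's "bus" is the memory controller itself, so it sidesteps the whole curve.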
More interestingly, would it even matter? We're always going to be limited by the physical interface.
Even if we get 4K monitors in the next few years.. how long until there's no point in investing in faster GPUs for 99.9% of users because we're already running full-rez at 240Hz with every effect turned on? We'll get to a point, eventually, where us poorly constructed humans won't recognize the difference.. and far before that, we'll decide we won't pay a boat-load of money to barely be able to discern a difference.
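For what that ceiling looks like in numbers [4K at 240Hz is my stand-in for "full rez with everything on", not a spec]:

```python
# Pixel throughput the display chain would demand at the hypothetical ceiling.
w, h, hz = 3840, 2160, 240
gpix_per_sec = w * h * hz / 1e9
print(f"~{gpix_per_sec:.1f} Gpix/s")   # ~2.0 Gpix/s, and then the target stops moving
```

Once the target stops moving, integrated parts only need time to catch it.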
OK.. finally ready to go to sleep. Just in time.
I can easily see APUs more or less obsoleting discrete graphics, though probably not totally at any point, unless the gains from integration totally trump raw size minus the choke points.
We're already half-way there. I quoted a business-class Dell AIO for a client at work today; the only GPU options were Intel integrated. I wouldn't be at all surprised if half the machines shipping today world-wide had Intel or AMD integrated graphics.
But, but, but that's not good enough for gamers, right?
It's becoming enough right now for a lot of gamers. I've got a bunch of CHEAP compute nodes that I built with Llano A8s, and I just added a Trinity A10. I installed Win7 first to play with them.. they're impressive, hugely impressive from a price/performance and size/performance perspective.
A Trinity A10 is around $100 and it'll run rings around a $150 CPU paired with a $100 graphics card.. possibly a $150 card. It's no GTX680, but how many of those are sold compared to total computer shipments?
More in line with your argument.. an A10 is a 100W chip. It'll easily smoke a 65W CPU paired with a 100W [possibly more] GPU. There's a synergy in the integration that increases performance per watt and performance per transistor when you compare APU vs. CPU+GPU.
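Putting the price and power figures above side by side [the parts and numbers are the rough ones from this thread, not measurements]:

```python
# APU vs. CPU+GPU combo: cost and wall-power for roughly comparable gaming ability.
apu_cost, apu_watts = 100, 100                      # Trinity A10
combo_cost, combo_watts = 150 + 100, 65 + 100       # midrange CPU + midrange GPU
print(f"cost:  {combo_cost / apu_cost:.1f}x the APU")    # 2.5x
print(f"power: {combo_watts / apu_watts:.2f}x the APU")  # 1.65x
```

The combo pays 2.5x the money and 1.65x the watts without clearly winning.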
As the APUs get faster.. you'll eventually find the GTX650 buyers saying.. why bother? Then it'll be the GTX660 market.. eventually it'll be the GTX670 (or the comparable product in that market segment at that time).. and when that happens you're seriously biting into people who are willing to spend serious $$ on gaming.