I really thought they were doing that before... It's not as if it's a new idea!
For those of you not in the know, X11 is the protocol used to draw GUIs under UNIX OSes (apart from Mac OS X). Its 'legacy' acceleration method, called XAA, dates back to the 1990s. It has mostly been replaced with EXA (which has been extended to make use of a 3D engine to accelerate 2D operations). Intel took EXA, removed the part that requires CPU copies, and created UXA from it.
Reading Keith Packard's blog, it seems that the UNIX community (at least under BSD and Linux, but probably Solaris and others too) has been saving RAM like that for a looong while (see his description of how EXA manages pixmap writes) and has KNOWN the issue with such a system all along: when the CPU has to read back from VRAM, it is horribly expensive.
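To see why the readback hurts, here's a minimal sketch (mine, not Keith Packard's or the X server's code): it times CPU reads from ordinary system RAM against reads from a memory-mapped framebuffer, assuming a Linux fbdev device at /dev/fb0 is available on your machine. VRAM apertures are usually mapped uncached or write-combined, so the second number comes out dramatically lower:

```c
/* Minimal sketch, not the X server's code: time CPU reads from ordinary
 * system RAM versus from a memory-mapped framebuffer (/dev/fb0, Linux
 * fbdev - availability depends on the system).  VRAM apertures are
 * typically uncached/write-combined, so CPU reads crawl. */
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

static double read_mb_per_s(const volatile unsigned char *src, size_t len)
{
    struct timespec t0, t1;
    unsigned long sum = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < len; i++)
        sum += src[i];                       /* force a real read of each byte */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    (void)sum;

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (len / (1024.0 * 1024.0)) / secs;
}

int main(void)
{
    size_t len = 2 * 1024 * 1024;            /* 2 MB test window */

    unsigned char *ram = malloc(len);
    memset(ram, 1, len);
    printf("system RAM : %.1f MB/s\n", read_mb_per_s(ram, len));

    int fd = open("/dev/fb0", O_RDONLY);     /* may not exist on your setup */
    if (fd >= 0) {
        struct fb_fix_screeninfo info;
        if (ioctl(fd, FBIOGET_FSCREENINFO, &info) == 0 && info.smem_len > 0) {
            size_t fb_len = info.smem_len < len ? info.smem_len : len;
            void *fb = mmap(NULL, fb_len, PROT_READ, MAP_SHARED, fd, 0);
            if (fb != MAP_FAILED)
                printf("framebuffer: %.1f MB/s\n", read_mb_per_s(fb, fb_len));
        }
        close(fd);
    }
    free(ram);
    return 0;
}
```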
So, this system is viable only if:
- you have a VERY fast PCIe card, with a very fast bus
- you have TRUCKLOADS of VRAM, so as to never need to move pixmaps back and forth
- or you have system RAM dynamically allocated and used as VRAM
And it depends on:
- having a very efficient RAM/VRAM allocator, one that keeps pixmap movements between VRAM and RAM to a strict minimum (a toy sketch of such an eviction policy follows this list).
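To make that point concrete, here's a toy sketch of the kind of eviction policy involved (purely illustrative, not the actual EXA/UXA allocator): pixmaps stay in VRAM while there's room, and when a new one doesn't fit, the least-recently-used one gets copied back to system RAM:

```c
/* Toy eviction policy, purely illustrative - not the actual EXA/UXA
 * allocator.  Pixmaps live in VRAM while there is room; when a new one
 * does not fit, the least-recently-used pixmap is copied back to RAM. */
#include <stdio.h>
#include <stdlib.h>

#define VRAM_BUDGET (256u * 1024 * 1024)     /* pretend 256 MB card */

struct pixmap {
    const char    *name;
    size_t         bytes;
    int            in_vram;
    unsigned       last_use;                 /* LRU timestamp */
    struct pixmap *next;
};

static struct pixmap *pixmaps;
static size_t vram_used;
static unsigned now;

static int evict_lru(void)
{
    struct pixmap *victim = NULL;
    for (struct pixmap *p = pixmaps; p; p = p->next)
        if (p->in_vram && (!victim || p->last_use < victim->last_use))
            victim = p;
    if (!victim)
        return 0;
    victim->in_vram = 0;                     /* copy back to system RAM: slow */
    vram_used -= victim->bytes;
    printf("evict  %-6s -> RAM\n", victim->name);
    return 1;
}

static void touch(struct pixmap *p)          /* "the GPU needs this pixmap" */
{
    p->last_use = ++now;
    if (p->in_vram)
        return;
    while (vram_used + p->bytes > VRAM_BUDGET)
        if (!evict_lru())
            return;                          /* too big to ever fit */
    p->in_vram = 1;                          /* upload to VRAM */
    vram_used += p->bytes;
    printf("upload %-6s    VRAM used: %zu MB\n", p->name, vram_used >> 20);
}

static struct pixmap *new_pixmap(const char *name, size_t bytes)
{
    struct pixmap *p = calloc(1, sizeof *p);
    p->name  = name;
    p->bytes = bytes;
    p->next  = pixmaps;
    pixmaps  = p;
    return p;
}

int main(void)
{
    /* A dozen 40 MB window pixmaps against a 256 MB budget: from the
     * seventh window on, older ones get pushed out, and touching them
     * again means another round trip over the bus. */
    static char names[12][8];
    struct pixmap *win[12];

    for (int i = 0; i < 12; i++) {
        snprintf(names[i], sizeof names[i], "win%d", i);
        win[i] = new_pixmap(names[i], 40u * 1024 * 1024);
        touch(win[i]);
    }
    touch(win[0]);                           /* revisit the first window */
    return 0;
}
```

With 40 MB windows and a 256 MB budget, the seventh window already forces an eviction, and every later switch back to an old window means tens of megabytes shuffled over the bus again.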
Look at the graph above: opening a window eats 40 MB. Open a dozen and that's roughly 500 MB of VRAM allocated.
But you only have 256 MB of VRAM...
Well, it means that the OS has to perform pixmap swaps. These can happen on something as stupid as a glyph (a text element). Imagine that the VRAM allocator hasn't freed enough VRAM to hold a glyph set (and a glyph isn't a single byte: it's a full matrix of dots, of which we only see a filtered subsample on screen due to kerning, hinting and such shit, and it can't be stored in VRAM as a path), and you type fast.
Time travel back to the 1990s and you'll see the glyphs draw themselves on screen as you type.
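About those glyphs: the sizes below are assumptions on my part (cell size, pixel format, glyph count), not measurements from a real server, but they show what one rendered glyph set weighs once it's stored as pixmaps rather than as outlines:

```c
/* Back-of-the-envelope sketch with assumed sizes, not measured values:
 * a rendered glyph is a small ARGB pixmap, not an outline, so a cached
 * glyph set is real memory that has to live somewhere. */
#include <stdio.h>

int main(void)
{
    int glyph_w = 12, glyph_h = 20;          /* assumed cell for a ~16 px font */
    int bytes_per_pixel = 4;                 /* ARGB32 render format */
    int glyphs = 256;                        /* one face, one size, basic set */

    long per_glyph = (long)glyph_w * glyph_h * bytes_per_pixel;
    long per_set   = per_glyph * glyphs;

    printf("one glyph : %ld bytes\n", per_glyph);               /* 960 bytes */
    printf("one set   : %ld bytes (~%ld KB)\n", per_set, per_set / 1024);
    /* Every face/size/weight combination gets its own set, and they all
     * compete for the same VRAM as the 40 MB window pixmaps above. */
    return 0;
}
```

A couple of hundred KB per face/size combination is nothing next to a 40 MB window pixmap, but it still has to be resident the instant you hit the key, or the server stalls on a transfer instead of just blitting the glyph.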