Is frame generation (FG) good or bad, then? That's the real question, and I'd put the answer somewhere in between. It's interesting, and can make games look smoother. But it never (in my experience) really makes them feel more responsive.
I'm all about use cases. This type of technology would be great for prerendered material running at a low frame rate, like a movie at the NTSC rate of 29.97 FPS, or a long CGI animation or cutscene. With everything rendered ahead of time there are no timing issues, and artifacts should be minimal. A video game OTOH needs to respond to the user's input, and frames need to hit the screen quickly; that's the same reason we avoid triple buffering, and frame generation essentially requires that kind of buffering. As discussed, it only really works at high FPS, where it isn't needed; at low FPS it makes the experience even worse.
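The latency argument can be sketched with some back-of-the-envelope math. This is a simplified model with illustrative assumptions (it ignores render queues, display scanout, and driver overhead): interpolation has to hold back the most recent real frame until the next one arrives, so it can blend between them, which adds roughly one extra real frame time of delay.

```python
# Simplified latency model for frame interpolation. All numbers are
# illustrative assumptions, not measurements of any specific implementation.

def frame_time_ms(fps: float) -> float:
    """Time between frames at a given frame rate, in milliseconds."""
    return 1000.0 / fps

def native_latency_ms(fps: float) -> float:
    # Assume roughly one frame time from input sample to display.
    return frame_time_ms(fps)

def interpolated_latency_ms(real_fps: float) -> float:
    # Interpolation waits for the *next* real frame before it can show
    # anything in between, adding about one extra real frame time.
    return 2 * frame_time_ms(real_fps)

for fps in (30, 60, 120):
    print(f"{fps:>3} real FPS: native ~{native_latency_ms(fps):.1f} ms, "
          f"interpolated ~{interpolated_latency_ms(fps):.1f} ms")
```

Under this model, at 30 real FPS the penalty is an extra ~33 ms, which is very noticeable; at 120 real FPS it's only ~8 ms. That's exactly the "only works at high FPS, where it's not needed" problem.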
Worse, it artificially inflates benchmarks by incrementing the FPS counter on those pseudo frames. How would you react if someone claimed a 50~100% "performance improvement" from turning on motion blur? This can lead people to make purchases they wouldn't have otherwise made.
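The inflation is easy to see in a quick sketch. This is a hypothetical model of a counter that increments on every presented frame, real or generated; the function name and ratio are my own illustration, not any vendor's actual counter logic.

```python
# Hypothetical FPS-counter model: every presented frame increments the
# counter, but only real frames reflect new game state and player input.

def displayed_fps(real_fps: float, generated_per_real: int) -> float:
    """Counter reading when N interpolated frames are inserted per real frame."""
    return real_fps * (1 + generated_per_real)

real = 45  # what the GPU actually renders
print(displayed_fps(real, 1))  # counter shows 90 "FPS" -- a 100% "improvement"
```

The counter doubles, but the game still only samples input and simulates 45 times per second, so responsiveness hasn't improved at all.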