Titan benchmarks are out.

Yep... here we are, Anik8:

"We’re naturally happy to see GK110 bring an emphasis back onto compute. However, there’s no question that GeForce GTX Titan’s ability to cut through real-time graphics is top priority. In order to balance that 75% increase in shader and texture unit count, Nvidia also bolsters the GPU’s back-end. GK104’s four ROP partitions are able to output eight 32-bit integer pixels per clock, adding up to what the company calls 32 ROP units. GK110 leverages six of those blocks, increasing that number to 48.

Both the GeForce GTX 680 and Titan employ GDDR5 memory running at 1,502 MHz. But because GK110 features six 64-bit memory interfaces, rather than GK104’s four, peak bandwidth increases 50% from 192 GB/s to 288 GB/s. That matches AMD’s reference Radeon HD 7970 GHz Edition card, which also sports 1,500 MHz GDDR5 on a 384-bit bus."
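The bandwidth figures in the quote are easy to sanity-check. A minimal sketch (the helper name and the GB = 10^9 bytes convention are my own; GDDR5 transfers 4 bits per pin per command clock, since data is double-pumped on a write clock that itself runs at twice the command clock):

```python
def peak_bandwidth_gbps(clock_mhz, bus_width_bits, transfers_per_clock=4):
    """Peak memory bandwidth in GB/s (GB = 1e9 bytes).

    GDDR5 moves 4 bits per pin per command clock, so effective
    data rate = command clock * 4; multiply by bus width in bytes.
    """
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

# GK104 (GTX 680): 1,502 MHz GDDR5 on a 256-bit bus
print(peak_bandwidth_gbps(1502, 256))  # ~192 GB/s
# GK110 (Titan): same memory clock, 384-bit bus (six 64-bit interfaces)
print(peak_bandwidth_gbps(1502, 384))  # ~288 GB/s
```

The 50% bandwidth jump falls straight out of the bus width: six 64-bit memory controllers instead of four, at an unchanged memory clock.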
 
 


Almost certain to improve with driver tweaks, but I had read a couple
of times in recent months that this generation of cards from
both Nvidia & AMD would show a smaller performance increase over
the previous series than the previous series had over its predecessor.

Don't know how true that is, but seems to be starting that way.
 

Definitely wrong if you go by actual product generations (i.e. the GeForce 600 series vs. the 500 and 400 series, or the Radeon HD 7000 series vs. the 6000 and 5000 series).
 

I wanted to see the same fluid simulation, Mandelbulb, and image-editing compute benchmarks that Tom's showed when comparing the 7970 with the 680. I mean, we should keep a consistent compute suite that we can correlate with the previous articles.