If it seems too good to be true ... it likely is.
I wish PowerVR well in their endeavors but will reserve judgement until 3rd party verification and 'qualifications' and/or limits ...
Have a look at this demo then, captured in real time at GDC 2014:

Ray tracing and low power? Highly unlikely ... maybe in simple scenes. Yet I have high regard for ray tracing as the future.
"The single-core, quad-cluster PowerVR GR6500 fits the performance, power and area budget typical for a high-end gaming tablet but can also scale to a multi-core configuration (i.e. 8-16 GPU cores of four clusters each) that would fit in an ultra-slim game console."

How large is the chip? Will we see this integrated onto motherboards, or even onto discrete GPUs? This is pretty awesome.
How about John Carmack?

No. Thank you. That makes me even more dubious.
Why would you be surprised that dedicated hardware (a ray-tracing chip) can outperform a general compute approach (CUDA ray tracing)? I would be *shocked* if ray tracing weren't an order of magnitude faster on a dedicated chip. In fact, full disclosure: I demoed Caustic's previous ray-tracing card and still have it installed in my computer right now. It's faster at ray tracing than the Titan that sits alongside it, but uses something like 10 watts versus my Titan's ludicrous wattage.

I do not doubt that 'real-time ray tracing' can be accomplished by various fancy algorithms ... which drop gazillions of pixels. We have seen this debate before -- the last time being 7-8 years ago when Intel was pumping Larrabee.
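For anyone following along who hasn't written one: below is a minimal toy sketch in plain C++ of the per-pixel work being argued about. It is my own illustration, not Caustic's or PowerVR's API; the names (hitSphere, the pinhole camera setup) are made up for the example. It fires one primary ray per pixel and tests it against a single sphere, which is the kind of intersection math that a dedicated chip bakes into fixed-function units instead of running on general compute cores.

    // Minimal primary-ray tracer sketch (illustrative only, not any vendor's API).
    // Compile with: g++ -O2 rays.cpp -o rays
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for the nearest t > 0.
    // This test, repeated millions of times per frame against scene geometry,
    // is the inner loop that dedicated ray-tracing hardware accelerates.
    static bool hitSphere(Vec3 o, Vec3 d, Vec3 c, float r, float& t) {
        Vec3 oc = sub(o, c);
        float b = dot(oc, d);                      // d is assumed normalized
        float disc = b * b - (dot(oc, oc) - r * r);
        if (disc < 0.0f) return false;             // ray misses the sphere
        t = -b - std::sqrt(disc);
        return t > 0.0f;                           // hit must be in front of camera
    }

    int main() {
        const int W = 64, H = 32;                  // tiny ASCII framebuffer
        Vec3 sphereC = {0.0f, 0.0f, -3.0f};        // unit sphere 3 units away
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // Map the pixel to a ray direction through a pinhole camera.
                float u = 2.0f * x / W - 1.0f;
                float v = 1.0f - 2.0f * y / H;
                Vec3 d = {u, v, -1.0f};
                float len = std::sqrt(dot(d, d));
                d = {d.x / len, d.y / len, d.z / len};
                float t;
                std::putchar(hitSphere({0, 0, 0}, d, sphereC, 1.0f, t) ? '#' : '.');
            }
            std::putchar('\n');
        }
        return 0;
    }

A real tracer replaces the single sphere with an acceleration-structure traversal over millions of triangles, plus secondary rays for shadows and reflections; that divergent, memory-bound traversal is exactly the work a fixed-function unit offloads from general compute cores, and it is also where the "fancy algorithms" save power by tracing fewer rays than there are pixels.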