Imagination Creates PowerVR GR6500 Low-Power Real-Time Ray Tracing GPU

Status
Not open for further replies.

knowom

Distinguished
Jan 28, 2006
I'm more than a bit skeptical of the trickle-down effect, but who the hell knows. I just don't see other manufacturers adopting this unless they're directly nudged into it through competition. The implications are promising enough, but it still seems unrealistic.

I highly doubt AMD would have any interest in this either, as they could do something similar by just making a PCIe form-factor, all-in-one APU system, sort of like Intel's HDMI all-in-one design but more powerful.

Really, if they did that, coupled with CrossFire bridge support and APU socket support, they'd have something interesting. Perhaps standard GPU rendering with background-cached ray tracing: the more you play the game, the better it would look in certain areas. Even if they only ray traced cached player models, that could be a big improvement.
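
Something along these lines, as a very rough sketch (every name below is made up for illustration; nothing here comes from a real engine or from Imagination's SDK): each frame is rasterised as usual, while a small background budget keeps refining a cache of ray-traced lighting for a few player models, so the picture improves the longer you play.

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical cached lighting state for one player model.
struct CachedLighting { int quality = 0; };   // 0 = plain raster fallback

int main() {
    std::vector<std::string> playerModels = {"hero", "rival", "npc_guard"};
    std::unordered_map<std::string, CachedLighting> rtCache;

    for (int frame = 0; frame < 6; ++frame) {
        // 1) Normal raster pass every frame (stubbed out here).
        std::printf("frame %d: rasterised scene\n", frame);

        // 2) Background ray tracing budget: refine one cached model per frame.
        const std::string& target = playerModels[frame % playerModels.size()];
        rtCache[target].quality += 1;   // pretend a few more rays were traced for it

        // 3) Composite: models with better cached ray-traced lighting improve over time.
        for (const auto& name : playerModels)
            std::printf("  %-10s rt-quality=%d\n", name.c_str(), rtCache[name].quality);
    }
}
```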
 

knowom

Distinguished
Jan 28, 2006
That example is not overly impressive either; more complex, higher-resolution scenes would demand drastically more from the hardware, in both performance and power efficiency.
 

New AMD GPUs don't need bridges for CFX. They use DMA to do it over PCIe.
 

Alexandru Voica

Honorable
May 4, 2013
Ray Tracing and low power? Highly unlikely... maybe in simple scenes. Yet, I have high regard for ray tracing as the future.
Have a look at this demo then, captured in real time at GDC 2014:
https://www.youtube.com/watch?v=LyH4yBm6Z9g

This is real-time, low-power ray tracing running on an earlier version of the PowerVR Ray Tracing architecture (i.e. not the chip described in the article), which was much slower than what we will be able to achieve with the PowerVR GR6500.

Again, here's John Carmack after seeing the demo:
https://twitter.com/id_aa_carmack/status/296648316937191425
 

Alexandru Voica

Honorable
May 4, 2013
How large is the chip? Will we see this integrated onto motherboards or even onto GPUs, potentially? This is pretty awesome.

The single-core, quad-cluster PowerVR GR6500 fits the performance, power and area budget typical for a high-end gaming tablet but can also scale to a multi-core configuration (i.e. 8-16 GPU cores of four clusters each) that would fit in an ultra-slim game console.
 
How about John Carmack?

No. Thank you. That makes me even more dubious.

I do not doubt that 'real-time ray tracing' can be accomplished by various fancy algorithms ... which drop gazillions of pixels. We have seen this debate before -- the last being 7-8 years ago when Intel was pumping Larrabee.



 

Gavin Greenwalt

Distinguished
Jul 18, 2015
I do not doubt that 'real-time ray tracing' can be accomplished by various fancy algorithms ... which drop gazillions of pixels. We have seen this debate before -- the last being 7-8 years ago when Intel was pumping Larrabee.

Why would you be surprised that dedicated hardware (a raytracing chip) can outperform a general compute approach (CUDA raytracing)? I would be *shocked* if raytracing weren't an order of magnitude faster on a dedicated chip. In fact, full disclosure: I demoed Caustic's previous raytracing card and still have it installed in my computer right now. It's faster at raytracing than the Titan that sits alongside it, but uses something like 10 watts versus my Titan's ludicrous wattage.

Dedicated hardware is how we can do H.265 in a couple of watts versus dozens of watts on a GPU or CPU. A modern CPU can barely handle 4K H.265 decoding, while a smartphone breezes through it with ease when teamed with a hardware decoder. If you want brute ray count, everybody always knew that raytracing could be done faster on a dedicated chip.

The problem with previous dedicated raytracing hardware was that it almost always also used fixed-function shaders, a la a GPU from the 1990s. PowerVR's OpenRL addition to the sordid history of hardware raytracing is that they found a clever way to do per-pixel shaders (up to a point) that could be efficiently teamed with a hardware raytracing chip. So it's a mostly-programmable hardware raytracing solution, just like GPUs these days are mostly-programmable hardware rasterization solutions. I see no reason to doubt that a raytracing chip would run circles around a rasterizing chip at raytracing.
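
To make that split concrete, here's a toy sketch of the idea; it assumes nothing about the real OpenRL API, and every name below is invented for illustration. The traversal/intersection step is a black box you hand rays to, which is exactly the part that maps well onto dedicated silicon, while the per-ray shading is a callback the developer writes.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };
struct Hit  { bool found; float t; Vec3 normal; };

// Stand-in for the fixed-function part: scene traversal / intersection.
// On a dedicated chip this box-and-triangle work lives in silicon; here it is
// just a single hard-coded sphere so the sketch stays self-contained.
Hit traceRay(const Ray& r) {
    const Vec3 center{0, 0, 5};
    const float radius = 1.0f;
    Vec3 oc{r.origin.x - center.x, r.origin.y - center.y, r.origin.z - center.z};
    float b = oc.x * r.dir.x + oc.y * r.dir.y + oc.z * r.dir.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - radius * radius;
    float disc = b * b - c;
    if (disc < 0) return {false, 0, {}};
    float t = -b - std::sqrt(disc);
    if (t <= 0) return {false, 0, {}};
    Vec3 p{r.origin.x + t * r.dir.x, r.origin.y + t * r.dir.y, r.origin.z + t * r.dir.z};
    return {true, t, {(p.x - center.x) / radius, (p.y - center.y) / radius, (p.z - center.z) / radius}};
}

int main() {
    // Programmable part: a per-ray "shader" supplied by the developer, loosely
    // analogous to the ray shaders OpenRL exposes (the real API is nothing like this lambda).
    std::function<Vec3(const Ray&, const Hit&)> shade =
        [](const Ray&, const Hit& h) -> Vec3 {
            float d = std::fmax(0.0f, -h.normal.z);   // toy "headlight" shading
            return {d, d, d};
        };

    Ray primary{{0, 0, 0}, {0, 0, 1}};
    Hit h = traceRay(primary);            // "hardware" traversal
    if (h.found) {
        Vec3 c = shade(primary, h);       // programmable shading on the hit
        std::printf("hit t=%.2f colour=(%.2f %.2f %.2f)\n", h.t, c.x, c.y, c.z);
    }
}
```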
 

Joao Ribeiro

Reputable
Aug 8, 2014
Man, I hope this is implemented in a desktop GPU, where we have nice "big" thermal envelopes, enabling this architecture to shine in an unprecedented way and paving the way to the best graphics ever seen on a computer.
A nice thing would be to build a GPU around this and implement something like what Intel initially devised for the Pentium CPU, the x86 instruction-decoding block, but make it work to ensure backwards raster compatibility while enabling next-gen ray tracing at the same time.
 
For those who are doubtful or wondering "why": this is about selling intellectual property rights. This is indeed legit and can easily be scaled up, because it is fixed-function ASIC hardware rather than general-purpose FPU calculations. Fixed-function hardware enables a 10~100x increase in speed because you can do complex calculations in a handful of cycles instead of hundreds; it's like the difference between doing the math on an integer unit versus an FPU. Or, for a better example: your CPU is perfectly capable of rasterizing 3D scenes just like your graphics card does. The technique is known as software rendering, and there are full software implementations of OpenGL. Very old games used the CPU for this: Doom, Quake and the rest of the early-90s video games. The jump from CPU rendering to GPU rendering was extremely significant, and so will this be. They want to sell the technology, not chips, to graphics card manufacturers like nVidia/AMD, for them to integrate into their respective products as features.
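
To give a feel for where those "hundreds of cycles" go, here's the standard Moller-Trumbore ray/triangle test, a generic textbook routine rather than anything from PowerVR. A software renderer has to issue its few dozen floating-point operations one instruction at a time, and a single ray typically runs many such tests while walking the acceleration structure, which is exactly the kind of work a fixed-function unit collapses into dedicated logic.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle intersection: a few dozen floating-point ops,
// each of which a general-purpose core issues as an individual instruction.
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t) {
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray parallel to the triangle
    double inv = 1.0 / det;
    Vec3 s = sub(orig, v0);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    Vec3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > eps;
}

int main() {
    double t;
    if (rayTriangle({0, 0, 0}, {0, 0, 1}, {-1, -1, 5}, {1, -1, 5}, {0, 1, 5}, t))
        std::printf("hit at t = %.2f\n", t);
}
```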

Real-time ray tracing is insanely accurate, far more so than any other technique we have, but it comes at an equally insane performance cost that makes it prohibitive for anything but professional rendering.
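
To put that cost in rough numbers, here's a back-of-the-envelope calculation; the resolution, frame rate, sample count and hardware throughput figure are all illustrative assumptions rather than vendor specs.

```cpp
#include <cstdio>

int main() {
    // Assumed real-time target: 1080p at 30 fps with a modest ray count per pixel.
    const double width = 1920, height = 1080, fps = 30;
    const double raysPerPixel = 4;   // e.g. 1 primary ray plus a few shadow/reflection rays
    const double raysNeeded = width * height * fps * raysPerPixel;
    std::printf("rays needed: %.0f million per second\n", raysNeeded / 1e6);

    // Assumed throughput of a dedicated ray tracing block (illustrative, not a spec).
    const double hwBudget = 300e6;
    std::printf("rays per pixel affordable at that budget: %.2f\n",
                hwBudget / (width * height * fps));
}
```

Even a modest four rays per pixel at 1080p30 already lands near 250 million rays per second, which is why people keep reaching for dedicated hardware instead of brute-forcing it on general-purpose shaders.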
 

Star Pilgrim

Honorable
Jul 19, 2015
If nVidia bought this and incorporated it into GameWorks, it would be the final nail in AMD's coffin.
If AMD bought it, AMD would once again be king, as it would make shadows look realistic with no CPU/GPU impact.
 