RTPU: The Next Step in Graphics Rendering

Wow, this is kind of a breakthrough, and yes, it will be used in games eventually. But before this it was years down the road because of the high cost. Intel used 24 cores (quad-socket, six-core processors @ 2.66 GHz) to produce their ray-traced Quake Wars demo. With this card they may be able to downsize to a single octo-core machine. It would be really interesting if AMD picked this up; their complete-platform vision could actually come about. I also wonder if this is that Larrabee ace in the hole?
This would also be nice for Nvidia, continuing their lead in the discrete graphics card market.
But I assure you we will see this company pop up again real soon with something big.
 
Those scenes don't look like a Hollywood movie... so who cares if you can do it at 4 frames per second? Mediocre is still mediocre, even if done quickly.

You would not do Hollywood rendering at 4 fps. In film rendering, a single frame can take hours; this could theoretically shorten that to minutes. It's not a frame of a game, it's one frame of film.

This link will explain further what it does (http://www.anandtech.com/video/showdoc.aspx?i=3549).

Essentially, they will be focusing on developers and render farms, attempting to cut down the hours of rendering that the computers at those places do. The faster they can produce something the better, as time is money, and the more effort they can put into making each frame look better.
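As a rough illustration of the scale involved (all numbers here are hypothetical: the frame time, the film length, and the ~20x speedup figure quoted elsewhere in this thread):

[code]
# Back-of-the-envelope sketch with hypothetical numbers: how much wall-clock
# rendering a feature film takes, with and without a ~20x accelerator.
FILM_MINUTES = 90            # assumed feature length
FPS = 24                     # film frame rate
HOURS_PER_FRAME = 6          # assumed CPU render time per finished frame
SPEEDUP = 20                 # hypothetical acceleration factor

frames = FILM_MINUTES * 60 * FPS
cpu_hours = frames * HOURS_PER_FRAME
accelerated_hours = cpu_hours / SPEEDUP

print(f"frames to render:      {frames:,}")
print(f"CPU-only render hours: {cpu_hours:,.0f}")
print(f"with {SPEEDUP}x acceleration: {accelerated_hours:,.0f}")
[/code]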
 
[citation][nom]kittle[/nom]almost an infinite number[/citation]

Now there's a brilliant oxymoron if ever I saw one.
No number is so big that you "may as well call it infinite"; that's the whole point of infinity. If it had an infinite number of polygons, it would take an infinite amount of time to render.
 
Promising, but still hot air at the moment; these things are way above the consumer level. If you think Cry-games are photorealistic, think again. Hopefully Caustic ends up as another Ageia, but for the ray-tracing part, so we have some 'additional' incentive to upgrade our Nvidia Inside. Sorry ATI, your 3Dfx-esque comment on GPGPU snubs yourself.
 
The fact that it's called "Caustic" should tell you it's not for games (caustics are the patterns formed when light bounces or focuses within an object, like a bottle or a diamond). I doubt we're going to see games with caustics in the next 10 years.
 
Innovation is always good in my book, but only when it gets an endorsement from a graphics industry corporation will it be pushed to the mainstream.
This added realism sounds very nice, but right now the game industry needs good games more than it needs more realism, I think.
 
Considering the image, with what appears to be two processors and some other circuitry being cooled like this, I'm guessing this comes close to the 75 W power limit of the PCIe connector. So it's not all that likely that it'll be implemented directly on a GPU's board until the die is shrunk to those 28 nm AMD was dreaming about the other day. But I suppose none of the big players want to buy the company just yet, waiting until it's proven that it works. It may very well be worth the $4K if the performance increase is genuine, though. I'm sure a DreamWorks farm is expensive to run, and if a sub-75 W item can improve performance by just 25% of what is claimed, the investment might still cost less than the savings in processing time and power draw.
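A rough sketch of that back-of-the-envelope reasoning. Every number here (the realized fraction of the claimed ~20x speedup, the node cost per hour, the yearly utilization) is an assumption for illustration, not a figure from the article:

[code]
# Rough payback sketch; all inputs below are assumptions for illustration.
CARD_PRICE = 4000                   # quoted price of the accelerator
CLAIMED_SPEEDUP = 20                # figure discussed elsewhere in the thread
REALIZED = 0.25 * CLAIMED_SPEEDUP   # "just 25% of what is claimed" -> 5x
NODE_COST_PER_HOUR = 0.50           # assumed all-in cost: power, cooling, space
YEARLY_RENDER_HOURS = 8000          # assumed utilization of one farm node

hours_with_card = YEARLY_RENDER_HOURS / REALIZED
hours_saved = YEARLY_RENDER_HOURS - hours_with_card
savings = hours_saved * NODE_COST_PER_HOUR

print(f"node-hours freed per year: {hours_saved:,.0f}")
print(f"estimated yearly savings:  ${savings:,.0f} vs a ${CARD_PRICE:,} card")
[/code]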
 
[citation][nom]The Face[/nom]You would not do Hollywood rendering at 4 fps. In film rendering, a single frame can take hours; this could theoretically shorten that to minutes. It's not a frame of a game, it's one frame of film. This link will explain further what it does (http://www.anandtech.com/video/showdoc.aspx?i=3549). Essentially, they will be focusing on developers and render farms, attempting to cut down the hours of rendering that the computers at those places do. The faster they can produce something the better, as time is money, and the more effort they can put into making each frame look better.[/citation]

To add to The Face's useful link: to understand why the CausticOne is specifically for the high end, read this (http://arstechnica.com/hardware/news/2009/04/caustic-graphics-launches-real-time-ray-tracing-platform.ars).

 
Someone please tell me why this couldn't already be done with Nvidia's CUDA or AMD Stream technologies? I run Folding@home on my Nvidia cards, and it is far superior to the standard CPU builds. Just recompile the ray-tracing apps to utilize the GPU's parallel processing power.

I actually remember reading interviews with Carmack after Doom 3 launched saying they were looking into ray tracing as the next viable technology for game rendering.
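On the CUDA point: the reason the idea is plausible at all is that ray tracing is embarrassingly parallel per pixel. A minimal NumPy sketch (plain Python rather than CUDA, with a made-up scene) of a batched ray-sphere intersection test shows the kind of data parallelism a GPU port would exploit:

[code]
# Minimal sketch: every pixel's ray-sphere intersection test is independent,
# so they can all be evaluated as one batched operation. Scene is made up.
import numpy as np

def hit_sphere(origins, directions, center, radius):
    """Batched ray-sphere intersection test; one boolean per ray."""
    oc = origins - center
    a = np.sum(directions * directions, axis=1)
    b = 2.0 * np.sum(oc * directions, axis=1)
    c = np.sum(oc * oc, axis=1) - radius * radius
    return (b * b - 4 * a * c) >= 0.0   # discriminant test

# One ray per pixel of a 640x480 image, all shot along -z from a pixel grid.
xs, ys = np.meshgrid(np.linspace(-1, 1, 640), np.linspace(-1, 1, 480))
origins = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)
directions = np.tile(np.array([0.0, 0.0, -1.0]), (origins.shape[0], 1))

hits = hit_sphere(origins, directions,
                  center=np.array([0.0, 0.0, -3.0]), radius=0.5)
print(f"{int(hits.sum()):,} of {hits.size:,} rays hit the sphere")
[/code]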
 
[citation][nom]industrial_zman[/nom]Someone please tell me why this couldn't already be done with Nvidia's CUDA or AMD Stream technologies? I run Folding@home on my Nvidia cards, and it is far superior to the standard CPU builds. Just recompile the ray-tracing apps to utilize the GPU's parallel processing power. I actually remember reading interviews with Carmack after Doom 3 launched saying they were looking into ray tracing as the next viable technology for game rendering.[/citation]

It has to do with the level of detail that goes into the ray tracing: how many photons are being used and how many bounces they take. Then you get into caustics, which is light bouncing inside of an object (like the inside of a bottle or a diamond). This is meant to be able to do all of these things more quickly than any general-purpose CPU. It isn't meant for real-time rendering.
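To put rough numbers on "how many rays and how many bounces" (all values below are assumptions for illustration):

[code]
# Rough ray-count arithmetic: the ray count multiplies out quickly with
# samples and bounces. All values are assumptions for illustration.
WIDTH, HEIGHT = 1920, 1080     # assumed output resolution
SAMPLES_PER_PIXEL = 16         # assumed anti-aliasing / soft-shadow samples
MAX_BOUNCES = 4                # assumed recursion depth
RAYS_PER_BOUNCE = 2            # e.g. one reflection ray + one shadow ray per hit

primary_rays = WIDTH * HEIGHT * SAMPLES_PER_PIXEL
# Each bounce can spawn further rays, so the worst case grows geometrically.
total_rays = primary_rays * sum(RAYS_PER_BOUNCE ** b for b in range(MAX_BOUNCES + 1))

print(f"primary rays:            {primary_rays:,}")
print(f"total rays (worst case): {total_rays:,}")
[/code]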
 
[citation][nom]Tindytim[/nom] It isn't meant for real-time rendering.[/citation]I wasn't suggesting real-time rendering; I've been ray tracing for over 15 years now using POV-Ray, and I know how much more complex the calculations are. It's just the fact that we already have processors available to do this work inside most workstations. I bet if you went to Pixar or DreamWorks studios and took a peek at their workstations, you would probably see Nvidia Quadro or ATI FireGL cards already in them, and all that power going to waste when they could render a sample scene on their own workstation without sending it out to the farm.
 
Caustic Graphics has developed a set of software algorithms that allow for smarter ray processing.

When you talk about CPUs and GPUs, they are both powerful in their own right, but what may take one processor a single cycle could take another 10 cycles. Although Caustic Graphics is keeping the specs of this chip to themselves, I guarantee this processor is tailored to the algorithms they created, so it can complete them in the most efficient manner.

I suppose you could use CUDA with the software algorithms to decrease render times, but I doubt they would reach the same performance as this tailor-made processor.
According to James, modern processors on the market today do not offer the right kind of controls or instructions to efficiently implement the company's designs. They used the example of an enterprise-level (hundreds of Gigabit connections) networking switch to elaborate: could an Intel Core i7 processor be used in that case? Maybe, but the multiple-order-of-magnitude loss in efficiency completely destroys the cost model. What makes the efficiency argument complete for Caustic is the claim that their ASIC will only use 20 watts of power per RTPU, compared to the 200 watts of a high-end graphics card.
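A quick sketch of that perf-per-watt argument; the 20 W and 200 W figures come from the claim above, while the relative throughput is purely an assumption for illustration:

[code]
# Sketch of the perf-per-watt claim. The wattages are the ones quoted above;
# the relative throughput is an assumption for illustration only.
RTPU_WATTS = 20
GPU_WATTS = 200

def perf_per_watt(relative_throughput, watts):
    return relative_throughput / watts

# Assume, for argument's sake, a GPU that traces rays 3x faster in absolute
# terms than one RTPU.
rtpu_throughput = 1.0
gpu_throughput = 3.0

print(f"RTPU perf/W: {perf_per_watt(rtpu_throughput, RTPU_WATTS):.3f}")
print(f"GPU  perf/W: {perf_per_watt(gpu_throughput, GPU_WATTS):.3f}")
# Even that 3x-faster GPU delivers ~3x less ray-tracing work per watt here.
[/code]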
 
It may be too little, too late. The concept of a dedicated processor to accelerate ray tracing is something that should have existed a long time ago, given how time-consuming ray tracing is for a general-purpose CPU, but it just didn't happen in the mainstream.

This is not the first dedicated hardware used to do ray tracing. There have been expensive, high-end boxes of specialized hardware to accelerate ray tracing for many years, and that stuff isn't mainstream yet. The different thing here may be that the chip itself is specifically designed to accelerate ray tracing, but I wonder if that will be enough, because they are talking about 20x speed increases, and ray-tracing and rendering programs adapted to GPU computing through CUDA or OpenCL are already offering acceleration factors of 20x or even much more over CPU rendering, depending on the rendering engine.

At that price it is a product that is too niche; it will target precisely the people who use render farms. But because of the gigantic evolution of GPUs lately, GPUs may make the concept of the render farm itself disappear in the near future (they could threaten it, if they aren't starting to do so already), so this product may be short-lived. It is possible that GPUs will increase in power and completely absorb the functionality this chip is offering; as a matter of fact, that is already happening at the moment.

Rendering software companies have been adapting their rendering programs for GPU computing for several years now, and the results are starting to appear on the market. The problem is that all that rendering software has to be adapted again to support this alternate technology. So why would software companies adapt their software for a chip that is so niche, and whose future is so uncertain, when they can get similar performance gains from GPU acceleration in products that are already entering the mainstream, and when things like CUDA and OpenCL have already started to gain public acceptance and become widely adopted standards?

For this company, trying to push the performance of these expensive chips to compete with the gains GPUs are going to make in the same area is going to be tough, because GPUs are bought by far more people and are already sold by the truckload compared to these chips, and the cash that companies like ATI-AMD and Nvidia earn from that may allow them to evolve their GPUs faster, surpass the acceleration this chip offers, and stay ahead of it. I think this product has a tough battle ahead. Only time will tell.
 
@PixelOz

I doubt you'll see competition. The chipmaker mentioned the unit only taking 20 watts, so I think it's more likely you'd see a GPU with a ray-tracing co-processor on board. We could see a Radeon 6870 with a lean Caustic ray-tracing co-processor for real-time ray processing, while the workstation cards could easily carry a beefier RPU (Ray Processing Unit) for more accurate, less time-sensitive calculations.
 
But you are talking about two different chips now. I thought this was touted as a high-end accelerator for pre-rendered graphics, and that type of functionality has merits in its own right, but accelerating ray tracing for real-time graphics and for pre-rendered graphics are two different things.

I'm not talking about the real-time 3D graphics that today's graphics cards already perform in games, of course, because I know those do not use ray tracing at all. What I mean is whether this is related to the newer ray-tracing algorithms I hear about over the net that are oriented more toward real-time graphics, if that is even possible.

The thing I do not understand very well is why we are talking about real-time ray tracing here when only a 20x speed increase is mentioned. A 20x speed increase is nowhere near what you would need for real-time ray tracing. For example, if an HD image that takes me 20 minutes to render in a 3D modeling program (pretty common) is sped up 20x, it will still take 1 minute to render, and that is a single frame; we need to render about 60 per second for really smooth animation. So how does this work? Unless we are talking about those newer ray-tracing algorithms that are supposed to be designed for real-time 3D graphics being accelerated by this type of chip, because that is the only way it would be possible.
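Spelling out the arithmetic from that paragraph in a few lines:

[code]
# The arithmetic from the paragraph above: a 20x speedup on a 20-minute
# frame is nowhere near real time at 60 fps.
FRAME_SECONDS_CPU = 20 * 60        # 20 minutes per HD frame, as in the example
CLAIMED_SPEEDUP = 20
TARGET_FPS = 60

accelerated = FRAME_SECONDS_CPU / CLAIMED_SPEEDUP   # 60 s per frame
realtime_budget = 1 / TARGET_FPS                    # ~17 ms per frame
required_speedup = FRAME_SECONDS_CPU / realtime_budget

print(f"frame time with 20x speedup: {accelerated:.0f} s")
print(f"real-time budget per frame:  {realtime_budget * 1000:.1f} ms")
print(f"speedup actually required:   {required_speedup:,.0f}x")
[/code]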

Is that what you mean by two different chips? One designed for games, for low- to medium-end gaming graphics cards, and another, more expensive one designed for, let's say, Quadro-type workstation graphics cards?

If that is the case, we could be talking about two different algorithms, because ray-tracing algorithms designed for real-time graphics have to make compromises of one kind or another compared to the ones used for pre-rendered graphics in the higher-end chips and cards. Is this technology capable of handling these two dissimilar methods? Or have they developed a method that can be adapted for both uses? What is the deal with this?

And you mention that you think it will not compete, but how? After all, current workstation graphics cards already offer CUDA acceleration just like the gaming cards, and those are high-end graphics cards. Like I mentioned already, 20x ray-tracing acceleration, and even much more than that, is already achievable with newer GPU-computing-capable 3D gaming cards, depending on the ray-tracing method used, and those cards cost nowhere near the $4,000.00 price tag.

So why would I pay $4,000.00 for a ray-tracing accelerator that offers me no more acceleration than what I can obtain with a regular $200.00 graphics card? How is that not competing? That kind of speed is already available in programs like Hypershot, which uses newer rendering methods to achieve a lot of speed even on the CPU alone, and whose newer version can now take advantage of CUDA to use both the CPU and the GPU power available in the machine to accelerate even more.

I can understand that a chip specializing in ray tracing could be even faster than a GPU at processing it, but if that is the case, it will have to be significantly faster, not just a bit faster, to be able to compete. Otherwise, who is going to pay $4,000.00, or any other amount of money, for power that is achievable with a regular GPU, let alone a multi-GPU setup?

So let's say I put four $400.00 graphics cards running in SLI mode (newer motherboards can do 4-way SLI already, as far as I know). If I do that, I spend $1,600.00 on graphics cards, and if a single GPU already achieves a 20x increase in ray-tracing performance or more, wouldn't that setup accelerate such a ray-tracing program much more? And that is still $2,400.00 short of that RPU card. So what is the deal with this?
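The cost comparison from that paragraph, written out (assuming, optimistically, that four GPUs scale linearly; that is an idealization, not a measured result):

[code]
# Cost comparison from the paragraph above. Linear scaling across four GPUs
# is an optimistic assumption, not a measurement.
RTPU_PRICE = 4000
GPU_PRICE = 400
GPU_COUNT = 4
SPEEDUP_PER_GPU = 20              # the 20x figure quoted for GPU renderers

gpu_setup_price = GPU_PRICE * GPU_COUNT
gpu_setup_speedup = SPEEDUP_PER_GPU * GPU_COUNT   # idealized linear scaling

print(f"4-GPU setup: ${gpu_setup_price:,} for ~{gpu_setup_speedup}x (idealized)")
print(f"RTPU card:   ${RTPU_PRICE:,} for ~{SPEEDUP_PER_GPU}x (claimed)")
print(f"price difference: ${RTPU_PRICE - gpu_setup_price:,}")
[/code]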

I understand that this is the first generation of this technology, but by the time the next generation is here, we could also have faster GPUs that again offer competing performance increases. So how is that not competing?
 
Compare this to what happened with the Ageia PhysX card technology. Isn't that exactly what happened there? That type of acceleration was originally sold as a proprietary board that plugged into the PC much like a graphics card or a sound card, but then Nvidia bought the company and was able to take those physics acceleration algorithms and run them on their GPUs at very high speeds.

Couldn't a similar thing happen to this technology in the near future? What if Nvidia or ATI extended the abilities of their GPUs to offer a similar capability that could compete directly with this card?
 