Next-Gen 3D Rendering Technology: Voxel Ray Casting

Adroid

Distinguished
Great article. It's an interesting point that about 10 years ago the whole market went to polygons instead of voxels. Who knows where we would be if the industry had gone that direction. Nowadays processors are becoming more and more capable of such a feat.

I think the sweet spot is finding a mix of both... maximizing both CPU and GPU use in games. It's a heck of a lot of programming, but that's why they get paid the big bucks to make the games we love, right? :D
 

Kelavarus

Distinguished
Sep 7, 2009
510
0
18,980
Odd. This seems like a completely different tangent from ray casting, not really in the same vein. Related, but not enough that you'd throw one out the door because of the other.
 
G

Guest

Guest
The 122 number is correct. First of all, the second image uses a 16 by 16 grid for the smallest squares, compared to the first image's 12 by 12 grid. The squares occupied by the circle contain 50 of the smallest squares and 13 of the larger squares.
If the 13 larger squares are counted at the size of the smaller squares, that's 13 * 4 = 52, for a grand total of 122 of the smaller squares.
 

matt87_50

Distinguished
Mar 23, 2009
1,150
0
19,280
Sure, this system is faster than ray tracing... as long as you don't want any of the benefits of ray tracing...
Secondary rays and the ability to render higher-order primitives are the things that make ray tracing look better!

This may be better than rasterization, but it can't be compared to ray tracing...

If you ask me, dynamic geometry is already hard enough without that sort of limitation being introduced. It would be cool to see it work side by side with rasterization, though.
 

pottuvoi

Distinguished
Oct 22, 2009
6
0
18,510
Why can't it be compared to ray tracing?
The only difference from 'traditional' polygonal ray tracing is that the acceleration structure is your geometry; there is nothing that says you cannot shoot secondary rays.

If you store alpha for the voxels, you get pre-filterable geometry, which brings some nice advantages like cheap AA and blurry reflections and shadows (the blurrier they get, the less ray traversal they need).
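
To make that concrete, here is a minimal sketch of firing a secondary (shadow) ray through the same voxel structure the primary ray used. This is a toy illustration only: a dense boolean grid and a crude fixed-step march stand in for a real sparse voxel octree and its traversal.

```python
# Toy sketch: primary and secondary rays traced through the same voxel
# structure. A dense boolean grid stands in for a sparse voxel octree;
# the traversal is a crude fixed-step march, but the point stands: nothing
# in the data structure stops you from firing shadow rays.

import numpy as np

N = 64
grid = np.zeros((N, N, N), dtype=bool)
grid[20:40, 20:40, 20:40] = True          # a solid voxel cube as "geometry"

def march(origin, direction, max_t=200.0, step=0.5):
    """Step along the ray; return the first occupied voxel position, or None."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    p = np.asarray(origin, float)
    for _ in range(int(max_t / step)):
        p = p + d * step
        i = p.astype(int)
        if np.all(i >= 0) and np.all(i < N) and grid[tuple(i)]:
            return p
    return None

light = np.array([-60.0, 30.0, 30.0])      # light on the camera's side

hit = march(origin=[0.0, 30.0, 30.0], direction=[1.0, 0.0, 0.0])
if hit is not None:
    # Secondary (shadow) ray: re-use the same structure, offset the origin
    # to avoid self-intersection, and march toward the light.
    to_light = light - hit
    shadow_origin = hit + to_light / np.linalg.norm(to_light) * 1.5
    in_shadow = march(shadow_origin, to_light) is not None
    print("hit at", hit.round(2), "| in shadow:", in_shadow)
```

With pre-filtered (mip-mapped) voxel data, those secondary rays could sample coarser levels of the structure, which is where the cheap blurry reflections and shadows come from.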
 

ptroen

Distinguished
Apr 1, 2009
90
0
18,630
Sounds like a lot of jumps, which means you would probably do a lot of the processing on the CPU for this method (more cache/memory). Relying on the CPU of course means latency to transmit the frame over the bus, which isn't so bad, but one thing I should mention is that you would then want your physics on the GPU and use the GPU less for the actual rendering, since it's not really suited to voxels in the way Carmack is describing.
I mean, point sprites are OK, but if you are going to do that huge traversal, the near-teraflop potential of the GPU is going to drop to CPU-level performance due to all the cache penalties. Yup, a lot of jumps.

As for other rendering methods, I don't think there is a clear winner yet, but I wouldn't dismiss alternatives to voxels like ray tracing or radiosity because of the problem of secondary rays. Whichever method you pick, you're going to have to deal with the jumps and use PARALLELISM to get a good image. Good image hacks are a must too, since the decade of the T&L pipeline has given us a lot of convenient hardware features.

Cheers,
 

rcmaniac25

Distinguished
Jul 20, 2009
64
0
18,630
I think it's a good stepping stone, but ray tracing is still going to be the ultimate graphical end point. Ray tracing mimics the way light travels in the real world.
In order to achieve perfect images you need to combine technologies: rasterization, polygons, patches, ray tracing, procedural models/textures, ray casting, voxels, and volumetric items. All are needed for some item at one point or another.
Far-off/low-detail objects: rasterization. Simple flashlight-like lighting: ray casting. Clouds/dust/etc.: volumetrics (it is possible to use equations instead of models to dynamically generate and render animated volumetric items; not highly detailed, but with enough detail to pass in most games). Carmack is an incredible programmer and will probably come up with some way to not only make voxels a stepping stone but also get them even closer to extremely detailed volumetric rendering. I'm interested in what Carmack and company will come out with in id Tech 6 (whenever that happens, since 5 isn't even out yet).
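
On the "equations instead of models" point, here is a toy sketch (my own illustration, not from the article): a volumetric item defined purely by a closed-form, time-varying density function and rendered by accumulating absorption along a ray.

```python
# Toy illustration of "equations instead of models" for volumetrics:
# density is a closed-form function of position and time, so the "cloud"
# needs no stored geometry at all and animates for free.

import math

def density(x, y, z, t):
    # A blobby, time-varying field; any smooth function works here.
    swirl = math.sin(x * 0.8 + t) * math.cos(z * 0.8 - t)
    falloff = max(0.0, 1.0 - (x * x + (y - 2.0) ** 2 + z * z) / 9.0)
    return max(0.0, falloff * (0.6 + 0.4 * swirl))

def transmittance(origin, direction, t, steps=64, step=0.2):
    """March through the field, accumulating absorption (Beer's law)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    absorb = 0.0
    for i in range(steps):
        s = i * step
        absorb += density(ox + dx * s, oy + dy * s, oz + dz * s, t) * step
    return math.exp(-absorb)   # 1.0 = clear air, 0.0 = opaque cloud

print(transmittance((0.0, 2.0, -6.0), (0.0, 0.0, 1.0), t=0.0))
```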
 

ajsellaroli

Distinguished
Dec 14, 2007
297
0
18,780
I really don't understand why some games with bad graphics (GTA IV) get so much lower fps than games with awesome graphics (COD4, L4D, TF2).

Seriously!! GTA IV's graphics are butt ugly, but the game gets horrible fps with my Radeon 3870, while I can play COD4, L4D, and TF2 flawlessly @ 1920x1200, and they look so much better (GTA IV's shadows SUCK).
 

mikenygmail

Distinguished
Aug 29, 2009
362
0
18,780
Geez, where is the October "Best Graphics Card For The Money" article? We were assured in writing that the articles would be available sooner, more toward the beginning of the month, and here we are on October 26th with no article. The last three articles were released on July 21st, August 24th, and September 10th; that last date gave hope that the October article would arrive on the 1st, but almost four weeks later there is no article at all.
 

pottuvoi

Distinguished
Oct 22, 2009
6
0
18,510
Actually, good old bump mapping and normal mapping do the same thing from different source textures.
Both change only the direction of the normal on the surface to create the feel of more complex lighting.

In the old days this was done with a heightmap, but that changed when normal mapping was introduced (a normal map stores the normal for that point, in tangent, object, or world space, or whatever other space one might come up with ;)).

All the other methods, like parallax mapping, relief texturing, cone step mapping, and POM, are per-pixel displacement mapping techniques, and they do only one thing: find the proper texture coordinates for the current pixel, which they do by raycasting.
They have absolutely nothing to do with normal or bump mapping, and normal mapping is still used to get proper shading information for that point.

Actually, in voxel ray-tracing normal mapping is still used to store normal directions for each voxel, so lighting is absolutely the same as it is with polygons, and the techniques can easily be mixed.
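
To make the "find the texture coordinates by raycasting" point concrete, here is a toy sketch of that search. It is plain Python purely for illustration; real relief/POM implementations run this per pixel in a shader and usually refine the hit after the linear search.

```python
# The core of parallax/relief/POM in one place (toy Python, not shader code):
# march the view ray through a heightfield in tangent space and return the
# texture coordinate where it first dips below the surface. Normal mapping
# would then be evaluated *at that corrected coordinate* for shading.

def displaced_uv(u, v, view_dx, view_dy, view_dz, height, steps=32):
    """view_* is the tangent-space view direction (view_dz < 0, into surface).
    height(u, v) returns surface height in [0, 1]."""
    ray_h = 1.0                           # start at the top of the volume
    dh = -view_dz / steps                 # height drop per step
    du = view_dx / steps                  # uv drift per step
    dv = view_dy / steps
    for _ in range(steps):
        if ray_h <= height(u, v):         # ray has pierced the heightfield
            return u, v
        u, v, ray_h = u + du, v + dv, ray_h - dh
    return u, v

# Example: a single bump in the middle of the texture.
bump = lambda u, v: max(0.0, 1.0 - 20.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))
print(displaced_uv(0.4, 0.4, view_dx=0.3, view_dy=0.3, view_dz=-1.0, height=bump))
```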
 

FloKid

Distinguished
Aug 2, 2006
416
0
18,780
Hmmm, cool. I haven't finished reading the article, but I got an idea, and I like figuring things out by myself... Tell me if I'm right; going to sleep now. Take a simple example: you have a highly tessellated sphere that you want to display smoothly. Fit a cube to it and subdivide it in 3 dimensions... so now you have 8 cubes (inside 1 cube; in 2D the same algorithm gives 4 squares). Eventually you end up with a huge voxel in the middle of the sphere and many tiny ones on the edges of the sphere... Then you use something like raycasting, or I'm thinking culling, to get rid of the voxels that are completely behind the ones the player can see. Did I do good so far? I'll read the rest of the article tomorrow. I'm learning, I'm learning...
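
That intuition can be sketched directly. Below is a toy illustration of the build step described above (my own code, with an arbitrary sphere and depth limit): subdivision stops wherever a cube is entirely inside or outside the sphere, so a few big voxels survive in the middle and many tiny ones cluster at the surface.

```python
# Recursively subdivide a bounding cube against a sphere. In 3D each split
# yields 8 children (the 2D version of the same algorithm yields 4), and
# subdivision stops where a cell no longer straddles the sphere's surface.

import math

def classify(cx, cy, cz, half, r):
    """Is this cube fully inside, fully outside, or straddling the sphere?"""
    corner = math.sqrt(3) * half                 # center-to-corner distance
    d = math.sqrt(cx * cx + cy * cy + cz * cz)   # cube center to sphere center
    if d + corner <= r:
        return "inside"
    if d - corner >= r:
        return "outside"
    return "boundary"

def build(cx, cy, cz, half, r, depth, max_depth, cells):
    kind = classify(cx, cy, cz, half, r)
    if kind != "boundary" or depth == max_depth:
        if kind != "outside":
            cells.append((cx, cy, cz, half, kind))
        return
    q = half / 2
    for sx in (-q, q):                           # 2 x 2 x 2 = 8 children
        for sy in (-q, q):
            for sz in (-q, q):
                build(cx + sx, cy + sy, cz + sz, q, r, depth + 1, max_depth, cells)

cells = []
build(0, 0, 0, half=1.0, r=0.8, depth=0, max_depth=5, cells=cells)
print(len(cells), "voxels; coarsest half-size:", max(c[3] for c in cells))
```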
 

pottuvoi

Distinguished
Oct 22, 2009
6
0
18,510
Close; you raycast the cube structure until you hit a cube that is marked as visible.
Then you get the color, normal, etc. from it and render the pixel.

There is no need to cull anything, since you automagically know which voxel you hit first.
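
A toy sketch of that idea (my own illustration; a real renderer descends the octree front-to-back instead of testing every leaf, but the ray/box slab test and the "nearest entry wins" rule are the essence):

```python
# "Raycast the cube structure until you hit a visible cube", sketched as a
# nearest-entry search over leaf cubes with a standard ray/box slab test.
# Because the first hit along the ray is by definition the visible one,
# no separate culling pass is ever needed.

def ray_box_entry(o, d, lo, hi):
    """Return the ray parameter t where the ray enters the box, or None."""
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        if abs(d[axis]) < 1e-12:                 # ray parallel to this slab
            if not (lo[axis] <= o[axis] <= hi[axis]):
                return None
            continue
        t1 = (lo[axis] - o[axis]) / d[axis]
        t2 = (hi[axis] - o[axis]) / d[axis]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near if t_near <= t_far else None

# Two visible leaf cubes on the x axis, given as (center, half-size).
leaves = [((3.0, 0.0, 0.0), 0.5), ((6.0, 0.0, 0.0), 0.5)]
origin, direction = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)

hits = []
for (c, h) in leaves:
    lo = tuple(c[i] - h for i in range(3))
    hi = tuple(c[i] + h for i in range(3))
    t = ray_box_entry(origin, direction, lo, hi)
    if t is not None:
        hits.append((t, c))
print("first visible cube:", min(hits)[1])       # nearest entry wins
```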
 
G

Guest

Guest
You might wish to visit the Atomontage Engine homepage. The engine is voxel-based, has its own physics, supports destruction, some basic effects, etc., and works in real time on mainstream PCs. Content can be mixed with vector data (the usual old-style things like polygons, collision primitives...). My personal estimate is that voxel engines could have started to dominate the market as early as 2003-2004. It's a shame that too few people were really interested in voxels after the acceleration boom of the late nineties. So today is definitely a good day to develop voxel-based games :)
 

twile

Distinguished
Apr 28, 2006
177
0
18,680
Having taken graphics courses and done ray tracing, I saw the limitations of this from a long way off. Even in ray tracing you really have to use an octree; otherwise a scene with 10,000 triangles takes 10 times longer to render than a scene with 1,000 triangles, because every ray has to be checked against every triangle.
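
The scaling argument in rough numbers (illustrative Python, not a benchmark; the octree cost model here is deliberately idealized):

```python
# Without an acceleration structure, every ray tests every triangle, so cost
# grows linearly with scene size; a balanced octree makes the per-ray cost
# roughly logarithmic. Illustrative counts only.

import math

def naive_tests(rays, triangles):
    return rays * triangles

def octree_tests(rays, triangles, leaf_size=8):
    # Descend a branching-factor-8 tree, then test one leaf's triangles.
    depth = max(1, math.ceil(math.log(triangles / leaf_size, 8)))
    return rays * (depth + leaf_size)

rays = 1920 * 1080                       # one primary ray per pixel
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tris | naive: {naive_tests(rays, n):>15,}"
          f" | octree: {octree_tests(rays, n):>12,}")
```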

However, that doesn't mean a scene has to be static. You can have objects moving around; it just means the tree can become unbalanced and start to slow down. Alternatively, you can rebuild the tree every time you re-render, sorting millions of objects in 3D space. This can be done dozens of times per second on a GPU for fairly simple scenes (read: dozens of frames per second). However, I don't think that would play too nicely with the sort of MegaTexture voxel equivalent they're talking about: gigabytes of voxel octrees stored and streamed as needed. Certainly, for optical media like on consoles this would be a no-go (though with any luck, by the time voxel octrees are viable, even consoles will use hard drives to store all game data).

I think the idea is very cool, and certainly a fun mental puzzle, but at the risk of saying something that will make me look stubborn and short-sighted 10 years from now, I can't see how it's a real improvement over what we have today. Triangle rasterization is fast. If this sort of voxel ray casting doesn't give us improved interactivity and destructibility, and it doesn't do the secondary rays of ray tracing to yield better effects, I don't see why it's an appealing alternative in the slightest. Perhaps if they used voxels to simulate object and environment destruction, and made triangle meshes on the fly to cover the newly created surfaces, they could be useful for something. Or maybe I'm just missing the whole point of this rendering approach.
 