OpenGL 3 & DirectX 11: The War Is Over

Note, you could put an object onto a GPU; think of an object as a struct with pointers: you call one function, and it calls other functions. Each function could be a thread that works on a different buffer: one a Perlin function making a cloud texture, another computing the reflection mapping, another applying the reflection-mapping results to the normal map made by the Perlin function.
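Something like this rough Common Lisp sketch (all names here, like SHADER-OBJECT and RUN-OBJECT, are made up for illustration, not any real GPU API):

;; An "object" as a struct whose slots hold functions.
(defstruct shader-object
  perlin    ; function: makes the cloud texture
  reflect   ; function: computes the reflection mapping
  combine)  ; function: applies the reflection results to the normal map

(defun run-object (obj buffer)
  "Entry point: calls the object's other functions. Each FUNCALL
below could instead be dispatched to its own thread, each working
on a different buffer."
  (let ((clouds (funcall (shader-object-perlin obj) buffer))
        (refl   (funcall (shader-object-reflect obj) buffer)))
    (funcall (shader-object-combine obj) clouds refl)))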

See, the problem with the übershader is that to apply the functions in a different order, you'd need to build a different permutation of the function, or add that ordering option to the existing one.

The problem with function mapping is that to add more functionality you have to add more arguments, and for any arguments not used you have to pass in nulls or placeholders. So your simple texture has just become a complex mess of passed-in structs, pointers, and magic cookies. Maybe for a simple function this is not so bad, but an übershader could result in über spaghetti code.
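To make that concrete, a hypothetical "one big entry point" might look like this (none of these names are real APIs, and the body is a stub):

;; Every new feature adds an argument; unused slots get padded with NIL.
(defun uber-shade (diffuse normal-map env-map shadow-fn fog detail cookie)
  (list diffuse normal-map env-map shadow-fn fog detail cookie)) ; stub

;; A plain textured surface still threads NILs through every
;; feature it doesn't use:
;; (uber-shade my-texture nil nil nil nil nil nil)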

Maybe we should adopt Lisp at the shader level? At least we could write functions that make functions. Like saying "apply the reflection shader effect to this diffuse shader" to make a reflective-plastic shader.
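In sketch form (a shader here is just a function of (x y) -> intensity, and every name is invented for the example):

;; Stub environment lookup; a real shader would sample a cube map.
(defun reflection-sample (x y)
  (mod (* 0.37 (+ x y)) 1.0))

;; Make a trivial diffuse shader that always returns COLOR.
(defun diffuse-shader (color)
  (lambda (x y) (declare (ignore x y)) color))

;; A function that makes a function: wrap SHADER with a reflection term.
(defun apply-reflection (shader strength)
  (lambda (x y)
    (+ (* (- 1.0 strength) (funcall shader x y))
       (* strength (reflection-sample x y)))))

;; "Reflective plastic" is then just the composition:
(defvar *reflective-plastic* (apply-reflection (diffuse-shader 0.8) 0.3))
;; (funcall *reflective-plastic* 3 5) => a blended intensity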
 
BTW, you can't use preprocessor macros inside a function like a conditional of the language; macros are processed before the function is compiled. I know it was just an analogy, but I really wish you guys would extrapolate the idea to the other problems this design method would reveal. It seems that API designers can only think incrementally rather than outside the box. I have an idea: how about opening the sources and leaving it open to those who can?
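The timing issue is easy to show even with Lisp macros (a toy example; PICK-SHADER is invented):

;; Macros, like the C preprocessor, run before the function is
;; compiled, so they cannot branch on runtime values.
(defmacro pick-shader (flag)
  (if flag ''fancy ''plain))  ; FLAG is inspected at expansion time

;; (pick-shader t)   => FANCY  -- decided at compile time
;; (pick-shader nil) => PLAIN
;; (let ((f nil)) (pick-shader f)) => FANCY, wrongly: the macro sees
;; the symbol F (which is non-NIL), never its runtime value NIL.
;; That's exactly the #ifdef pitfall described above.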
 
On a low level, Assembly is Lisp: it's a language capable of self-modification, whereas C is not. When a C function calls another C function, the state of the CPU has to be pushed onto the stack before calling the other function. But if C could combine the two functions into a new function, there wouldn't be time lost saving and restoring that state; it would run inline (and be easily threaded, because the resulting function is the product of two or more functions being combined), and it would be very easy to embed as an inline machine-level function (or shader). Why have a symbol table? Put the values into the function as temporary registers. Using a compiled language with the features of an interpreted language to make inline, threadable machine code: wouldn't that be great?
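Common Lisp can actually demonstrate this today (FUSE, BRIGHTEN, and GAMMA are illustrative names, not any real API):

;; Fuse two functions into one freshly compiled unit at runtime,
;; roughly what the paragraph above asks of a shader compiler.
(defun fuse (f g)
  "Return a newly COMPILEd function computing (F (G x)). The pair
now lives in one compiled code object; an aggressive compiler may
inline across the boundary."
  (compile nil `(lambda (x) (funcall ,f (funcall ,g x)))))

(defun brighten (s) (min 1.0 (+ s 0.1)))
(defun gamma (s) (expt s (/ 1.0 2.2)))

(defvar *fused* (fuse #'gamma #'brighten))
;; (funcall *fused* 0.5) => gamma(brighten(0.5))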
 
Example, in Lisp-style pseudocode (bodies elided):

(defun perlin_noise2D (x y) ...)
(defun refmap (f u v fn) ...)
(defun image_map (lf x y) ...)
(defun load_image (name x y) ...)
(defun normal_func (f x y) ...)

Okay, say you need a ref map applied depending on the normals extracted from an image. You can pass normal_func as the normal argument to refmap, and pass the image map, with the name of the image to load, as the argument "f". The result of the call will be a partially applied function that is only lacking x and y coordinates. Calling it yields a color: the result of perturbing a normal computed by a normal-perturbation function that uses the image for input. Note that since the result is a function, it can be reduced to assembly code on the fly and cached on the graphics card, to be used directly or as part of another function. Now, this isn't a good example of something that can be threaded, but if you applied the result of the image-map function to the R, G, and B components of the image separately, you could thread that.
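A minimal runnable version of that pipeline, with stub bodies and simplified argument lists standing in for real texture math (every definition here is a placeholder):

;; Stub: a real version would decode NAME from disk.
(defun load-image (name)
  (declare (ignore name))
  (make-array '(4 4) :initial-element 0.5))

;; Return a lookup function over IMG.
(defun image-map (img)
  (lambda (x y) (aref img (mod x 4) (mod y 4))))

;; Return a function approximating the normal of field F at (x, y).
(defun normal-func (f)
  (lambda (x y)
    (- (funcall f (1+ x) y) (funcall f x y))))  ; crude finite difference

;; Return a shader that perturbs a base color by NORMAL.
(defun refmap (normal)
  (lambda (x y)
    (+ 0.5 (* 0.5 (funcall normal x y)))))

;; The partially applied result: a function lacking only x and y.
(defvar *shader*
  (refmap (normal-func (image-map (load-image "rocks.png")))))
;; (funcall *shader* 1 2) => a color sample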
 
Actually, the PS3 uses OpenGL (2.0 on the RSX, and ES 2.0 on the Cell), and games like Killzone, on this hardware, are more realistic and better-looking than DirectX 10 games on the PC.
 
And look how long Killzone is taking; it's gotten to the point that people have very high expectations of this game. Also, the PS3 is not quite OpenGL: it also uses an in-house proprietary API from Sony. When you install Linux on your PS3, it does not get direct access to the graphics hardware.
 
I'm not going to pretend to understand this completely, but I've worked on some 3D games and such with OGRE/OpenGL/D3D, and I have to say that I agree with most of the people here (doesn't it seem strange that all the long, well-constructed responses seem to agree with each other?). But I have to say about the Mac/Linux/Windows thing: if one OS makes you happy, USE IT. The end. I like Linux for its speed and customisability, and for fulfilling my need to tinker. I like Mac (*cough* Hackintosh *cough*) for its ease of use, but the hardware support is laughable (like Linux, but less so, except for GPUs) and the "Mac" hardware is overpriced. I like Windows because it supports everything, and I can get an AMD Phenom II 940 with a 4850, or a Q6600 with a 4850, for around $700, and I don't have any problems with it; it just works (maybe not for some people). In the end, I will always have one computer per OS for all my computing needs.
 
By the way, I use my Windows machine mainly for gaming. And just because the PC has little market share doesn't mean it is the worst. A lot of the console market is people under age 14, and (I asked a "typical" American schoolboy) in their opinion, as far as I can tell from him, PCs have crappy graphics and are overly expensive. Oh well. They are not expensive: just plop a $130 (maximum) graphics card into any fairly modern computer (a P4 would work, though a lot of people have Core 2 Duos) and away you go! I know, I can't expect people to install their own graphics card, even though it is about the easiest thing in the world.
 