Guest
Archived from groups: comp.ai.games,comp.games.development.programming.misc,comp.lang.java.advocacy,comp.theory (More info?)
Hi, today is my birthday, and since I'm alone I thought I'd write this
article. It's not supposed to be trolling, but a proposal for a lengthy
discussion.
This article is IMO on topic in java.advocacy, ai.games, and
games.programming.misc. I'm unsure whether it's on topic in comp.theory.
I keep hearing sentences like "Don't make it too complex", "A simple
function is better than a complex one", "Don't bother with anything
computationally expensive, it would needlessly slow your program down",
"It would be hard to maintain", "A simple loop would do", "It's just a
game, not a scientific application".
After three years of heavy programming, I think all of them were wrong.
Actually, that last sentence has two meanings. I agree that games
shouldn't be built by scientific rules. A game isn't a math-like
application that must be mathematically correct up to x degrees. Games
should be highly complex applications that are factually highly likely
to be correct up to x degrees. (Yes, if the game is complex enough, the
programmer or designer can't always know whether it is correct.)
Basically, suppose you have a function that accepts an integer (all
terms here are in the Java or MASM32 sense) and outputs another integer.
It can have 2^32 different input values and 2^32 different output
values. It doesn't matter that the function isn't mathematically correct
(that it's just a guess), as long as it is correct for every possible
input and output. Actually the outputs matter more. I'm frequently
confronted with ideas, from people who don't know the original code,
that it should use some other "function" with nicer folding properties:
it would be slower;
it would break the rest of the code;
it would break the functionality of dependent methods and libraries;
it would be precise in only the upper (or, less frequently, the lower)
16 bits out of 32.
What's wrong with people? Are they afraid of complexity? Do they depend
too much on their previous education (which was wrong, of course)? Or
are they just lazy?
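To make the point concrete, here's a minimal Java sketch (my own
example, not from any particular codebase): a bit-trick replacement for
division by 255, exactly the kind of "guess" function that isn't derived
from a proof but can be verified correct for every possible input. The
domain here is 16 bits, so the check is exhaustive.

```java
public class Div255 {
    // Approximate x / 255 using only adds and shifts.
    // No mathematical derivation is given here -- the function is
    // simply verified below against every possible 16-bit input.
    static int div255(int x) {
        return (x + (x >> 8) + 1) >> 8;
    }

    public static void main(String[] args) {
        // Exhaustive check over the whole 16-bit input domain.
        for (int x = 0; x <= 0xFFFF; x++) {
            if (div255(x) != x / 255) {
                throw new AssertionError("wrong at x = " + x);
            }
        }
        System.out.println("div255 correct for all 65536 inputs");
    }
}
```

Whether the shift trick is "mathematically correct" is beside the
point; once it's shown to agree with x / 255 on every input, the rest
of the code can depend on it safely.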
Another funny mistake is the belief that single precision is faster
than double precision. First of all, we're talking about the Intel
architecture. On the x87 FPU, as everyone knows, floating point numbers
are computed internally in 80-bit extended precision, no matter whether
they're stored as single (32-bit) or double (64-bit). Of course a GFX
card may use only FP32. So it would seem unnecessary to use double
precision at all. Wrong. As everyone who has played with complex
lighting knows, trying to use the lowest precision possible is a recipe
for disaster.
A simple example. You have one color channel with 8 bits of precision.
Now you do three lighting calculations in a row at the lowest precision
possible, 8 bits: 0xFF turns into 0x0A, then 0x12, then 0x41. Now
imagine carrying the intermediates at somewhat higher precision: 0x0A1,
then 0x11F, and the final 8-bit result comes out 0x40 instead. Funny,
that's a somewhat different result.
See? Transformation matrices aren't much different.
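The 8-bit example above can be sketched in Java (the attenuation and
gain factors are made up for illustration): quantizing back to 8 bits
after every lighting step gives a different final pixel than carrying
the intermediates at full precision and quantizing once at the end.

```java
public class LightChain {
    // Quantize back to an 8-bit channel after every lighting step.
    static int chainedAt8Bits(int channel, double[] factors) {
        for (double f : factors) {
            channel = (int) Math.round(channel * f); // rounds each step
        }
        return channel;
    }

    // Carry the intermediates in double precision, quantize once.
    static int chainedAtFullPrecision(int channel, double[] factors) {
        double c = channel;
        for (double f : factors) {
            c *= f;
        }
        return (int) Math.round(c);
    }

    public static void main(String[] args) {
        double[] lights = {0.04, 1.8, 3.4}; // made-up lighting factors
        System.out.println(chainedAt8Bits(255, lights));         // prints 61
        System.out.println(chainedAtFullPrecision(255, lights)); // prints 62
    }
}
```

Three steps are enough to make the two results diverge; longer chains
(or a full transformation pipeline) only make it worse.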
Actually, I've heard that Intel developed the SSE instructions "for
games". If that's correct, the excuse isn't valid. Games need parallel
operations, that's right, but the low accuracy of those operations was a
big hindrance. As a result of this thinking, the most common gaming
machine, the Athlon XP 2500+ (Barton), has no accelerated parallel
64-bit floating point operations. Celeron Ds aren't exactly nice either.
What I would like to see is 8x64, 4x128, 16x32, or even better 16x64,
8x128, 32x32. That would be roughly enough, though in my current program
I'd need roughly 256-bit precision.
I would also be interested in hardware-accelerated 128-bit floating
point numbers and 128-bit integer arithmetic.
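Until such hardware exists, 128-bit integer arithmetic has to be
emulated in software. A sketch of a 64x64 -> 128-bit unsigned multiply
in Java, using Math.multiplyHigh (available since Java 9) plus the
standard signed-to-unsigned correction terms:

```java
public class Mul128 {
    // Full 128-bit product of two longs interpreted as unsigned,
    // returned as {high 64 bits, low 64 bits}.
    static long[] unsignedMul128(long a, long b) {
        long lo = a * b; // low 64 bits wrap around naturally
        long hi = Math.multiplyHigh(a, b) // signed high half (Java 9+)
                + ((a >> 63) & b)         // add b if a's top bit is set
                + ((b >> 63) & a);        // add a if b's top bit is set
        return new long[] {hi, lo};
    }

    public static void main(String[] args) {
        // (2^64 - 1)^2 = 2^128 - 2^65 + 1 -> hi = ...fffe, lo = 1
        long[] p = unsignedMul128(-1L, -1L);
        System.out.printf("hi = %016x, lo = %016x%n", p[0], p[1]);
    }
}
```

A dedicated 128-bit multiply instruction would replace those four
operations (and the carry handling that full 128-bit add/sub would
additionally need) with a single hardware operation.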
Yes, I agree that complexity and precision come at a cost. But the cost
isn't hardware cost or speed cost; the cost is the brain of the
programmer and the creativity of the hardware developer.
--
Kizutsuite 'ta ano hi kara