Why on earth would you want to program graphics at a low level? It would take forever to create even a small number of visuals. That's why we have high-level languages in the first place. The only reason I can see for working with low-level code is performance.
I also find it very hard to believe that there are any huge restrictions on what a developer can create with a high-level API versus low-level coding. And with graphics engines like CryEngine and Unreal out there, the case this guy is making seems weak.
Other developers and I have been discussing this for years. It's nice to see Huddy take this step forward. At the end of the day, writing assembly for a diverse market is not wise. The gap between consoles and PCs is narrowing, like it or not.
Actually, coding against game engines and graphics APIs for a complete product is a lot more difficult than people think. Developers are looking for approaches that won't put a cap on what they want to do, but overall they should focus on gameplay experience and script rather than just visuals, as many have done over the past decade. Remember the '90s, when no one knew just how far things could be pushed across the board?
[citation][nom]Stryter[/nom]Why on earth would you want to program graphics at a low level? It would take forever to create even a small number of visuals. That's why we have high-level languages in the first place. The only reason I can see for working with low-level code is performance.[/citation]
So because you can't program graphics at a low level, professionals shouldn't either? Plus, you answered your own question: working with graphics at that low a level would give massive performance improvements.
We aren't talking about chucking the API altogether... just allowing developers the choice not to use it. Forcing them to use the API is rather... Apple-esque.
I guess the part about "all games looking alike" has some truth to it; thinking about shooters like MoH, CoD, etc., they have a point. But the costs of making a video game are high enough as it is. If every game developer is going to build his game from the ground up, it's gonna take forever. Personally, I would like to see more diversification, but I think it is no longer possible in the current economy.
[citation][nom]Anomalyx[/nom]So because you can't program graphics at a low level, professionals shouldn't either? Plus, you answered your own question: working with graphics at that low a level would give massive performance improvements. We aren't talking about chucking the API altogether... just allowing developers the choice not to use it. Forcing them to use the API is rather... Apple-esque.[/citation]
What? He didn't say anybody FORCES you to use the API. He said it would take FOREVER (sorta...) to program at a low level. He also didn't say you can't do it. He was pointing out that the guy here is stating the obvious: low level beats high-level languages -- BUT reality check, people. It would be absurd to program a massive game at a low level. If Blizzard did this, it would take at least 50 years to develop StarCraft 3, and by then the "visual innovation" you came up with would look like crap next to DirectX XX (twenty).
I actually think it's kind of ironic that AMD has someone saying this, as low-level (and likely non-platform-agnostic) graphics programming would end up hurting AMD, since developers these days seem to favor nVidia.
I think many of the developers asking for direct control have been spending too much time with consoles (where everyone has the same GPU). The logistical nightmare of supporting so many different GPUs on the market would make game development costs and times skyrocket if they chased the performance they want through this kind of per-GPU low-level optimization. Even if you limit the GPUs, optimizing heavily for (for example) the Radeon 5870, 6970, GeForce 485 and 580 will be FAR more difficult than just targeting Xenos in the X360. Not to mention throwing in Crossfire/SLI and GPUs that launch after the game...
What if they created smaller APIs that run in parallel with each other: one, like DX, that keeps the system safe, and another that is in direct contact with the graphics card drivers? (Kind of like letting the drivers determine how many shaders can and will be used on a system, and which effects go where.)
It is true that even a small and portable PSP Go has great graphics for its size and power consumption, but a PC has the ability to run many of those games at much, much higher resolutions (especially when using a triple- or penta-screen setup).
DirectX was originally developed essentially to bypass Windows, because Windows was unsuitable for gaming. Unfortunately, Alex St. John wasn't a good politician and didn't stay with Microsoft long, and DirectX began to lose its focus as soon as he walked out the door. It probably never became what he really wanted it to be, but St. John turned Windows around for gaming, making an operating system that was poor for gaming into one that was great for it. DirectX even came to surpass OpenGL, and while devs might bleat about DX11, it's the best API suite there is today.
Microsoft has ignored Alex St. John's methods and message, the very things that made DX a standard for game developers and made them happy to use it. They've made Windows more a part of how it works, not less, and that's a bad thing.
Here's an Alex St. John interview from a few years ago that I think pretty much sums things up:
All that being said, though, what is holding game developers back is the developers themselves, not any API. That's ridiculous. DX is just a tool to make life easier for you; it's not a limitation. Step out on the wild side, my friends, and expand what can be done instead of griping about what can't.
This argument is self-defeating. If a PC graphics card is already 10x more powerful than a console's, then this only pays off if you can get more than a 10x performance increase out of doing it.
The gap between consoles and modern video cards is already narrower than it should be, because devs don't want to (or can't afford to) take the time to optimize a game for the superior capabilities of the PC. So how on earth is it supposed to benefit anyone if we split the PC market into multiple segments, each of which they would have to optimize for separately?
They'd be better off developing for the PC first and then dumbing it down to work on a console. The low-level optimization that is possible on a console could then be used to squeeze the most performance possible out of the ported engine.
[citation][nom]wiyosaya[/nom]DirectX is an M$ specific api. How could it be anything other than bloatware?[/citation]
Actually DirectX beats the hell out of its nearest competitor, OpenGL. Even John Carmack, an OpenGL DIEHARD (who is still using OGL), has admitted that DirectX is better now.
Anyway, the only way this would work is if their low-level render path were only ONE of multiple render paths. In other words, have a low-level path or two, and then also a standard (but slower) DX11 or OGL render path. If you did this, you could run the game on future hardware without problems.
If you DIDN'T do this, then there's no guarantee that your low-level render paths will work properly on completely new GPU architectures a few years down the road. One of the advantages of higher level APIs is that your underlying hardware can completely change, and as long as the drivers and hardware can work together to be compliant with the APIs, then it doesn't matter what is under the hood.
Not to mention the fact that DX11 is pretty well threaded, which really helps reduce bottlenecking. I imagine that they'll only push this even further in the next version.
Yeah, his argument makes no sense. He basically says: DirectX is making it so PC games can't take advantage of the true power of modern discrete GPUs. Of course, getting rid of DirectX would result in games that A) made PCs constantly crash or B) cost WAY MORE to develop, making them not profitable at all. Basically, that means getting rid of DirectX is impossible. So... is he just whining?
I don't know a whole lot about professional game development, but I do know that when you code something at a low level, it takes longer and doesn't really lend itself to portability. The way I see it (from a perspective based on limited knowledge), you would have to optimize your game for tons of different hardware configurations, which is a big deal on the PC. With consoles there is a standardized platform, so those issues aren't nearly as troublesome. Also, if you code at a lower level without an API, aren't you going to have to implement graphical effects like shading almost from scratch? It just seems that this guy is talking about what is theoretically possible, not what is realistic.
It doesn't even make sense for this AMD fellow to be talking about the limitations of the DirectX API unless he's talking about the Xbox 360, because it doesn't seem like too many companies even care about the PC game market that much; for many, a PC version of a game is just an afterthought. How many companies even make dedicated PC games anymore? Almost everything these days is a console port, as evidenced by many PC games like Mass Effect 2 not even supporting the latest DirectX version (it uses DX9, two versions behind). Also, some games ported to the PC don't feel right: either the controls are kind of off (Dead Space 1 was almost totally unplayable on the PC without a game controller), the game mechanics are designed for a console, or there is some other problem.
I can't imagine many companies taking the extra time to code a PC game at a lower level than the DirectX API when they won't even do more than the most basic PC version. It seems like the first step toward improving graphics in PC versions of games is to actually support the latest DirectX version. Once game developers start porting console games to the PC with advanced DirectX 11 features available, then they can complain about the API limiting them; it makes no sense for developers to complain when they aren't even using the latest API.
This article would make more sense if he were talking about console development, because those platforms have standardized hardware. Even then, the PlayStation and the Wii use OpenGL; only the Xbox uses DirectX (DirectX 9, to be precise). So the only way this article makes sense to me is if it were talking about the Xbox (but it is clearly talking about the PC).
Thunderfox: yes, what you're saying makes more sense. Make the game for the PC first, then port it to the consoles. I think this is what Crytek has done with Crysis 2. As I understand it, they make a game for the PC, and their CryEngine 3 development platform produces optimized Xbox 360 and PS3 versions.
PC-to-console porting still doesn't solve the problem of making sure a game's mechanics and design fit the platform it's on. How many times have you played a console port or a PC port only to have it not "feel right" and/or not be as fun as on its primary intended platform?