DX 10.1 games coming soon


duzcizgi

Distinguished
Apr 12, 2006
We'll see. :) In fact, the R700 can work as a true multi-GPU part (not CrossFire, but as a symmetric multiprocessor sharing a single framebuffer). THAT is the revolutionary part. Watch out for the 4870 X2. They may include some Havok support and terrific graphics together with some bundled demos. ;)
 

Slobogob

Distinguished
Aug 10, 2006

I guess the flexibility of it will come in handy. When I read the AnandTech review showing the different caches and buffers, it reminded me more and more of a simple processor and its registers. I'm just hoping AMD is up to the task, since this undertaking of theirs will require more than just solid hardware. They will need some able programmers to lay out the software framework for things to come. Failing at that will cost them dearly.
 

duzcizgi

Distinguished
Apr 12, 2006
If I kept going, I'd probably be fired! :D
But yes, I'm a little bit pissed off about inefficient programming.
You know, they just say, "Well, if our program doesn't work on your computer, go buy a higher-spec machine," regardless of what the program actually does.
Excel 2007 spends 80% of its time drawing fancy graphics on menus, sidebars, and so on.
 

SpinachEater

Distinguished
Oct 10, 2007


Haha, MS probably doesn't fall under that category.... :whistle:

OOP :eek:
 

duzcizgi

Distinguished
Apr 12, 2006
Before I saw the Windows 2000 code that leaked a couple of years ago, I thought MS was terrible at programming. Then I saw the programmers' comments, with things like "I hate doing this, but in order to make Photoshop work, I have to!", and I changed my mind.
Windows is, in the end, an API, and programmers are supposed to write programs according to its documentation. Most programmers don't even read the documentation. They just google their problem, grab the first piece of code someone else wrote, and drop it into their project.
Later we hear that there's some problem caused by some undocumented function used by program XYZ, and MS needs to fix it.
What the heck? MS never documented that function for use! But a smart guy used it anyway and forced MS to document and fix a function that may be deprecated, was never intended to be used, or was a leftover from previous versions, etc.
Whatever. I don't blame the companies for inefficient programs; I blame the programmers. I'm a programmer, and I write inefficient code too, but at least any program I write will happily run on a PIII 600 MHz.
 

duzcizgi

Distinguished
Apr 12, 2006
shh, pro-Microsoft comments will get you hunted down and terminated. You have been warned.
I'm not for or against anybody. :) I'm against people accusing others without any insight into what's really happening there.
And, well, you can hunt me down, but if there were no MS, we'd be buying PII 300 computers with 16 MB of RAM and running DOS. :D
 

duzcizgi

Distinguished
Apr 12, 2006
OK, back to the topic:
1. DX10.1 will be a step forward, if game-engine writers support it.
2. ATi/AMD is ready for it, after being beaten badly for 18 months already.
3. nVidia still isn't ready for it and is promoting TWIMTBP to buy time.
4. Whatever they (ATi/AMD or nVidia) do, they feel Intel breathing down their necks.
5. nVidia needs to roll out its own all-SP architecture before Larrabee is out, or it'll be in big trouble even if it's sitting on a pile of cash.
6. Maybe for the first time, AMD is using ATi's good relations with Intel to put itself in a better position for the upcoming GPGPU war. (Havok)
7. Whether programmers like it or not, they'll have to throw away their old DX9-era engines and start writing new engines for the new era.
 

duzcizgi

Distinguished
Apr 12, 2006
ah yes, the chicken and the egg of computing: does software use more resources because of increases in hardware performance, or does hardware keep improving to keep up with software's demands?

Sort of.
But then, if there were no Windows, there would be no DirectX, and we wouldn't be having this DX10.1 debate here. :)
OpenGL was just GL back then, under SGI's licensing, limited to workstation graphics.
In the end, we demanded more from our hardware and software, and they responded accordingly.
The most important role MS played was standardizing the environment. I'm old enough to remember hunting for WordStar and Lotus 1-2-3 printer drivers for a new printer I'd bought. :D
 
@duzcizgi
This point you made: "7. Whether programmers like it or not, they'll have to throw away their old DX9-era engines and start writing new engines for the new era."

This is the whole pivot point in my opinion, from a purely gaming point of view. The people who heard the bad stuff about Vista at launch and are digging their heels in about moving from XP (myself included) won't adopt the newer OS unless they start seeing benchmarks with a serious FPS advantage over a range of conditions and with different engines.
Personally, I am very impressed with the new ATI cards. Given that we can expect further gains from properly coded DX10.1 games, it would probably only take a couple of such games to make me seriously think about upgrading. Obviously there is the whole Windows 7 issue to take into account. However, it's the devs that need to take this step and start making games that are significantly enhanced in a Vista/Windows 7 environment. Of course, making games that will only run in such an environment isn't good business sense, and I wouldn't suggest that, but until we see these new engines, the difference between DX9 and DX10/10.1 just isn't worth it.
I have just built a new machine and installed XP on it with a view to running it for many years to come, unless someone develops something to convince me otherwise.
Mactronix :)
 

radnor

Distinguished
Apr 9, 2008


Ah, that was the single most important thing they did, despite all the bumps and turns along the way. And it earned them loads of money as well.

Now it is time for WINE. And for Nvidia to return to its origins. Read the last sentence carefully and understand it.

Quoting Wikipedia:

A fresh start

Nvidia's CEO Jen-Hsun Huang realized at this point that after two failed products, something had to change for the company to survive. He hired David Kirk, Ph.D. as Chief Scientist from software-developer Crystal Dynamics, a company renowned for the visual quality of its titles. David Kirk turned Nvidia around by combining the company's experience in 3D hardware with an intimate understanding of practical implementations of rendering.

As part of the corporate transformation, NVIDIA abandoned proprietary interfaces, sought to fully support DirectX, and dropped multimedia-functionality in order to reduce manufacturing-costs. Nvidia also adopted the goal of an internal 6-month product-cycle. The future failure of any one product would not threaten the survival of the company, since a next-generation replacement part would always come available.

However, since the Sega NV2 contract remained secret, and since NVIDIA had laid off employees, it appeared to many industry-observers that Nvidia had ceased active research-and-development. So when Nvidia first announced the RIVA 128 in 1997, the specifications were hard to believe: performance superior to market leader 3dfx Voodoo Graphics, and a full hardware triangle setup engine. The RIVA 128 shipped in volume, and the combination of its low cost and high performance made it a popular choice for OEMs.

Ascendancy: RIVA TNT

Having finally developed and shipped in volume the market-leading integrated graphics chipset, NVIDIA set itself the goal of doubling the number of pixel pipelines in its chip, in order to realize a substantial performance-gain. The TwiN Texel (RIVA TNT) engine which NVIDIA subsequently developed could either apply two textures to a single pixel, or process two pixels per clock-cycle. The former case allowed for improved visual quality, the latter for doubling the maximum fill-rate.

New features included a 24-bit Z-buffer with 8-bit stencil support, anisotropic filtering, and per-pixel MIP mapping. In certain respects (such as transistor-count) the TNT had begun to rival Intel's Pentium processors for complexity. However, while the TNT offered an astonishing range of quality integrated features, it failed to displace the market leader, 3dfx's Voodoo 2, because the actual clock-speed ended up at only 90 MHz, about 35% less than expected.

NVIDIA responded with a refresh part: a die-shrink for the TNT architecture from 350 nm to 250 nm. A stock TNT2 now ran at 125 MHz, an Ultra at 150 MHz. Though the Voodoo 3 beat NVIDIA to the market, 3dfx's offering proved disappointing: it was not much faster and lacked features that were becoming standard, such as 32-bit color and textures of resolution greater than 256 x 256 pixels.

The RIVA TNT2 marked a major turning-point for NVIDIA. They had finally delivered a product competitive with the fastest on the market, with a superior feature-set, strong 2D functionality, all integrated onto a single die with strong yields, that ramped to impressive clock-speeds. NVIDIA's six month cycle refresh took the competition by surprise, giving it the initiative in rolling out new products.


The G80 and G92 have been around too long now. The GTX 200 might not be scalable down in complexity and cost. DX10.1 has been here for some time now.

Games usually take a two-year cycle to develop. Assassin's Creed already shipped DX10.1-enabled. I believe Age of Conan should have been too (as advertised), but they pulled it at the last minute.
Honestly? If Nvidia doesn't move fast enough, it will have the same problems as "Windows Vista Capable"/"Premium Ready".
 


Well, remember that shader-based AA is actually part of the DX10 spec; it's the minimum 4x support that's been added in DX10.1 (and not that everything gets 4xAA, but all cards must support at least 4xAA should they be asked to).

Call of Juarez created a big stir because it used the option of shader-based AA, and nV can still do it, just like AMD; its hardware just loses the benefit of the hardware resolve in the RBEs.

It's like putting a Buick Grand National GNX with a 3.8L turbo up against a 5.0L Mustang. Run flat out, the GNX wins; disable the turbo on the GNX, though, and the Mustang will outperform it.

The error for AMD was thinking that developers would prefer pixel-correct AA over the shortcut AA of a standard hardware resolve. That turned out to be a costly mistake for a company that used to be the more efficient one at AA. They've always pushed the envelope on their AA, allowing full FP16 support in the X1K despite most people saying it would be way too memory-hungry (which it would've been had they not changed the buffer).
I'm sure to them shader-based AA was the same "logical" extension that was already (and remained) part of the DX10.0 spec. However, they risked the all-or-nothing method, thinking they could still do the old method fast enough through the shaders, which is true compared to the X1K/GF7. But with the large increase in ROP and TMU counts from nV and the nearly free 2-4x AA, the difference in traditional AA performance became really noticeable. Run them both in shader-AA mode, though, and you'll see the G80 struggle greatly and the performance positions switch significantly. Even the R600 can do hardware resolve in the ROPs, but only on fully compressed tiles.
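To make the hardware-resolve vs. shader-resolve distinction concrete, here is a minimal, CUDA-flavoured sketch of a programmable resolve. In a real DX10/10.1 title this would be an HLSL pixel shader reading the individual samples of a multisample surface; the kernel name, buffer layout and 4-sample count below are purely illustrative assumptions, not anyone's actual implementation.

// Conceptual sketch only: a programmable MSAA resolve, i.e. the box filter the
// ROPs/RBEs normally do for free, done in code instead. Buffer layout, names
// and the 4-sample count are assumptions for illustration.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct RGBA { float r, g, b, a; };

__global__ void customResolve(const RGBA* samples,   // width * height * numSamples
                              RGBA* resolved,        // width * height
                              int width, int height, int numSamples)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    RGBA acc = {0.f, 0.f, 0.f, 0.f};
    int base = (y * width + x) * numSamples;
    for (int s = 0; s < numSamples; ++s) {
        RGBA c = samples[base + s];
        // A plain average reproduces the fixed-function resolve; a shader-based
        // resolve is free to weight samples, filter in linear space before tone
        // mapping, and so on. That freedom is the whole attraction.
        acc.r += c.r; acc.g += c.g; acc.b += c.b; acc.a += c.a;
    }
    float inv = 1.0f / numSamples;
    RGBA out = { acc.r * inv, acc.g * inv, acc.b * inv, acc.a * inv };
    resolved[y * width + x] = out;
}

int main()
{
    const int w = 640, h = 480, nSamples = 4;   // assume a 4xAA render target
    std::vector<RGBA> hostSamples(w * h * nSamples, RGBA{0.5f, 0.5f, 0.5f, 1.f});

    RGBA *dSamples, *dResolved;
    cudaMalloc((void**)&dSamples, hostSamples.size() * sizeof(RGBA));
    cudaMalloc((void**)&dResolved, w * h * sizeof(RGBA));
    cudaMemcpy(dSamples, hostSamples.data(), hostSamples.size() * sizeof(RGBA),
               cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    customResolve<<<grid, block>>>(dSamples, dResolved, w, h, nSamples);
    cudaDeviceSynchronize();

    RGBA p;
    cudaMemcpy(&p, dResolved, sizeof(RGBA), cudaMemcpyDeviceToHost);
    printf("resolved pixel (0,0): %.2f %.2f %.2f %.2f\n", p.r, p.g, p.b, p.a);

    cudaFree(dSamples);
    cudaFree(dResolved);
    return 0;
}

The only point is that once the resolve runs in code rather than in the RBEs/ROPs, you can weight or filter the samples however you like; that flexibility is exactly what the fixed-function path trades away for speed.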

The main thing is that people haven't really cared enough to make it an issue. The shader-based AA issue didn't garner even a fraction of the traction that the Oblivion FP16 HDR+AA issue did, so developers aren't about to add something that only seems to hurt performance for tiny, hard-to-defend differences. So it's not just them being lazy; it's also the gamers being indifferent.
It's kind of like the CD/MP3 situation: the enthusiasts want DVD-A/SACD and the like, whereas the majority of music buyers don't even care whether it's a 128 kbps (maybe 256 kbps) MP3, let alone CD quality.

Anyhow, the benefits of DX10.1 go far beyond that AA situation, which remains relatively unchanged from what the G80 supports, except for the ability to store/read colour/depth information from the buffer as input to other passes, which should improve AA in both quality and speed.

I don't have the resources here at work, but this presentation from WinHEC is a good quick guide:
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/PRI022_WH06.ppt
 

rangers

Distinguished
Nov 19, 2007
What's the point of supporting DX10.1 if ATI doesn't even have a card that can top the GTX 280?

And what's the point of supporting DX10.1 if they can't even beat Nvidia's latest card in DX10? lol


Is that all you can do, bash ATI? If you want to get ripped off by Nvidia, that's up to you; normal people will be going with ATI this time.

Just let it go, numbnuts.
 

duzcizgi

Distinguished
Apr 12, 2006
@mactronix
I've used both nVidia and ATi cards. I've even used Cirrus Logic and S3! Does anybody remember Trident?
Well, at the end of the day, I'm a programmer, not a hardcore gamer. I play games from time to time, when I find the time and enthusiasm to do so.
I'm looking at the situation from a programmer's point of view.
I see a great opportunity in ATi! (Hey, I wish it had been around when I was doing my graduation project in Physics!)
And CUDA is awesome too! (I wish it had been around when I was doing my graduation project in Applied Mathematics!)
In the end, the GPU is no longer just a GPU. It's a parallel processor, to varying degrees among brands. In fact, today's GPUs can be considered "massively parallel processing units."
So tomorrow's programmers should use them, in games, in applications, and so on.
I wish not only game programmers but also stock-market analysis programs would use them!
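For a taste of what that looks like outside of graphics, here is a minimal CUDA sketch of a trivially parallel, non-graphics job: a simple moving average over a price series, one output element per thread. The kernel name, window size and dummy data are illustrative assumptions only, not any real analysis package.

// Conceptual sketch only: GPGPU applied to a non-graphics task. One thread per
// output element computes a simple moving average over a price series. The
// kernel name, window size and dummy data are illustrative assumptions.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void movingAverage(const float* prices, float* out, int n, int window)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i + window > n) return;              // not enough data left for a full window
    float sum = 0.f;
    for (int k = 0; k < window; ++k)
        sum += prices[i + k];
    out[i] = sum / window;
}

int main()
{
    const int n = 1 << 20, window = 20;
    std::vector<float> hostPrices(n, 100.0f);  // dummy flat price series

    float *dPrices, *dOut;
    cudaMalloc((void**)&dPrices, n * sizeof(float));
    cudaMalloc((void**)&dOut, n * sizeof(float));
    cudaMemcpy(dPrices, hostPrices.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;
    movingAverage<<<blocks, threads>>>(dPrices, dOut, n, window);
    cudaDeviceSynchronize();

    float first;
    cudaMemcpy(&first, dOut, sizeof(float), cudaMemcpyDeviceToHost);
    printf("first %d-sample average: %.2f\n", window, first);

    cudaFree(dPrices);
    cudaFree(dOut);
    return 0;
}

Every one of the million outputs is independent, which is exactly the kind of work a "massively parallel processing unit" is built for.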
 
Raytracing, though, is a tough sell; it's the end game, IMO. What needs to be done now is to support ray tracing but focus on something more of a mix, like the ray-casting-plus-rasterization hybrid that's been proposed for many years (I first heard about it around 1999/2000).

There are currently benefits and limits to both. Rasterization's are primarily low overhead and efficiency. However, as we move forward, we'll want the more accurate but tougher ray tracing.

It's not quite related to DX10.1, but it is the future beyond it (although likely more of a DX11 competitor or successor, IMO).
 
Kind of puts a damper on nVidia's words when they said devs don't want to do DX10.1. It looks like they don't want DX10 either. They can't google something that hasn't been written yet. Anyone who supports that way of thinking SHOULD be stuck using a PIII with a Trident, heheh. My point is, this is the future; there's no reason to want to hold it back or slow it down. If you're an nVidia fan, we all know Intel's about to jump into this game, so wouldn't it be wise for both nVidia and ATI to make as much progress as possible using whatever is available, INCLUDING DX10.1, to make it just that much harder for Intel to enter the market? Make them have to reach several rungs higher? The benefits of DX10.1 are real; it's time we used them.
 
Yep, I agree with that, JDJ, and I've been saying something similar for a while.

Making people hem and haw about DX10.1 and even DX10 kind of prepares them to do the same about DX11.

The way nV and ATi have kept any other real competitors from entering the market has been to push the envelope so far ahead that the others couldn't keep up even in the mid-range. Just look at S3: I had high hopes for them, but really, with the GF8800GTS and HD4850 both selling for well under $200, what's left for them except the very low end, fighting for even smaller margins?

It is tough for them to do a redesign, but I think their focus on AMD/ATi kind of misses the point that Intel is coming, and that's not going to be like facing S3 or XGI; it'll be real competition from a company that's at least their equal at PR/marketing as well as dev influence.
 
Well, if the 4xxx series, with the price/performance it's coming out the door with, is a sign of things to come, then all we need is a good response from nVidia, and not a $650 monster either. That will make it just that much harder. Oh, and they'll need to adopt DX10.1 too, heheh.
 

spathotan

Distinguished
Nov 16, 2007
Even the S3 Chrome GT supports DX10.1. There is no logical reason why Nvidia will not support DX10.1, other than that they have too much pride to give off any sign that they are actually "following in the footsteps of ATI" for the first time in ages.
 

crosshares

Distinguished
Apr 30, 2008
:lol: I hate that the page numbers are so small here.

I don't expect too much from Windows 7. If it still supports legacy code from way back in the days of 98, then 7 will just be another epic fail.

Either M$ scraps legacy support (back to XP SP2 at the oldest), or it's time to lay the NT kernel to rest. I vote for the latter. :D
 