DX11 to show up soon?

Yes, I did know that, but I felt it was fair to pretend it didn't exist, seeing as Nvidia doesn't have it and it never got used because of that anyway.
 


I assume that last section is nV and Intel, not AMD.

Anywhooo, remember that tessellation in DX11 is handled differently and calls upon a Hull Shader and a Domain Shader, both of which would be required in hardware to make it work. So it's slightly different, not a straight transition, and it would require software emulation to enable on an HD2-4K card.
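If it helps to picture what's actually new in the pipeline, here's a minimal C++/D3D11 sketch (my own illustration, nothing to do with any vendor's code; the shader objects are assumed to be compiled elsewhere) of the two extra stages tessellation hangs off:

```cpp
#include <d3d11.h>

// Bind a tessellation-enabled pipeline. DX10-class hardware (e.g. HD2000-4000)
// simply has no HS/DS stages to bind, hence the talk of software emulation.
void BindTessellationPipeline(ID3D11DeviceContext* ctx,
                              ID3D11VertexShader*  vs,
                              ID3D11HullShader*    hs,   // new in DX11
                              ID3D11DomainShader*  ds,   // new in DX11
                              ID3D11PixelShader*   ps)
{
    ctx->VSSetShader(vs, nullptr, 0);
    ctx->HSSetShader(hs, nullptr, 0);   // Hull Shader: per-patch work, outputs tess factors
    ctx->DSSetShader(ds, nullptr, 0);   // Domain Shader: evaluates the tessellated vertices
    ctx->PSSetShader(ps, nullptr, 0);

    // The tessellator only runs on patch-list topologies.
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
}
```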

ATi/AMD gloss that over with their 'history of Tessellation' bit, with the original TruForm included.

You'll probably also see a reorganization of the shaders for a more efficient compute-shader model, though a smaller change than the one nV is likely to make to get their shaders more efficient in a compute role.
 
So Nvidia are building a tessellator from scratch, going DX11 even though they skipped 10.1, sticking more GPGPU stuff on, and moving to GDDR5 while shrinking to 40nm. Does anybody actually believe this is possible lol.

The reality is, they might as well sit this round out. Maybe concentrate on fixing those laptops instead.
 
Well, Nvidia is a big corp which should have a good R&D department, so I have good faith they will deliver on their refinement release of DX11 cards. Nvidia didn't skip DX10.1; they just didn't want to change their processes for something they knew wouldn't last that long anyway.

ATI always promotes themselves as the best through innovation, while Nvidia takes the approach of being best through refinement and making what works better.

ATI can take that approach because they aren't the leader in the Nvidia vs ATI race, and they have a smaller processing facility, which costs less when changing their practices from one card to the next.

Edit: G80 vs R600 was rape cake T_T. I hope it doesn't turn out like that again; ATI did horribly vs Nvidia, letting Nvidia charge *** loads for their premium product because ATI couldn't deliver, and it got worse with the G92 release.

Also, 'rape' does not trigger the censor lol
 
Kinda to the point tho, as DX11, tessellation, etc. are all brand new, and nVidia has screwed the pooch when going to new processes before. I see jennyh's point, but at most it'll only take more time for nVidia.
 


They were a big company 'with a good R&D dept' when they made the FX series too. Doesn't mean much.

Nvidia didn't skip DX10.1

Yes they did, and went so far as to downplay it and say their reasoning was transistor budget.

ATI always promotes themselves as the best through innovation, while Nvidia takes the approach of being best through refinement and making what works better.

They each do that when it fits their own current situation; that wasn't the party line during the GF6800/X800 era, and nV didn't expect that during the R9700/FX era before it either.

ATI can take that approach because they aren't the leader in the Nvidia vs ATI race, and they have a smaller processing facility, which costs less when changing their practices from one card to the next.

ATi and nV use the same facility, TSMC.

As for who's leading, nV loses money making and selling their GPUs, while ATi is AMD's main profit source, stemming the flow of red from the CPU division. So really, who leads whom right now?

It's also unlikely that the RV8xx series leaves itself open for a repeat of the G80 vs R600 situation, as it doesn't appear to be the later-launching part this time, and it doesn't have to worry about being on a different node than the competition: both will be using 40nm and should run into the same issues if there's a repeat of the problems experienced with 80nm HS.
 
Rumors are still running that G300 will be a monster: high performance not only in GPGPU but in gaming as well. But it'll cost ya.

This actually bugs me big time. Nvidia STILL hasn't scaled the GTX chips down to the lower segments. They renamed the G92 chips and sold them as new. Now (rumor has it) they will make an EVEN BIGGER CHIP that supports DX11? If they couldn't scale down the GTX, how will they scale down the G300? Will they support DX11, but only for those who can pay their price? Will they support DX11 in the high end, and the other segments 12-18 months later? This is seriously F'ed up, and I hope this isn't the case.
 
ATI always promotes themselves as the best through innovation, while Nvidia takes the approach of being best through refinement and making what works better.

Except for the 6xxx series against the x8xx series. The 6xxx supported SM3, while the x8xx only supported SM2. At least AMD was big enough to admit they didn't support it this round, and they would the next. How many rounds has Nvidia skipped DX10.1? (I guess two as long as you don't count the 9xxx chips as new.)

Companies always put the best spin on things. If it suits the marketing dept to claim SM3 support before their competitors, then they'll do it. But if the competitor fires back with something else, well, then they'll claim XXX excuse. What matters to me is AMD said 'next round' and did it. Nvidia only seems to rename chips and put spin on things.

Edit: fixed quote tag.
 
Well, it's speculated that G200 should shrink to 40nm for the mid range, with the good ol' G90s for entry/low end. A higher-clocked, much smaller G200 should outperform the R700 series and be priced/margined well for them, if the 40nm problems are solved.
Speculation is they (ATI) have their next gen ready, but due to the 40nm problems they're trying to ramp production/inventory ahead of release, so there'll be no shortages like we see with the 4770.
 
A 40nm G200 might outperform a 40nm R700, but will it outperform the bottom-end R800?

ATI don't just hold all the aces, they hold all the cards (no pun intended); they are in complete control of the market except for the very, very top. They will get that back soon of course with the 58xx and the 58xx X2. I will be disappointed if the single-GPU 58xx isn't at least as fast as a GTX295, however.

If we fast forward 3 months, the GPU market could easily look like this:

5870 X2 an absolute mile ahead of anything else (possibly 2x faster than a GTX295).
5870 in second place, tied with the GTX295.

Now, assuming Nvidia shrinks the G200 and cuts prices as much as they can (don't forget the 448- and 512-bit buses), those GTX 285, GTX 275 and GTX 260 cards still have to be faster than the equivalent 5850s and 5830s. The difference again is that ATI will make money on these small chips while Nvidia keeps losing it.

The good old G90s will be crushed by 40nm 4870s and 4850s.

Nvidia need a miracle just to stay in the game next year. They are 3-6 months behind, and although I expect the G300 to be a real powerhouse, it could already be all over for them by the time it's released.
 


But that's the same as with DX8.1 vs DX9, DX9 vs DX10, etc.

A still screenshot does little to expose any differences, especially in things like efficiency, which is the name of the game with DX10.1 & DX11, not dramatic visual differences like DX7/8 to DX8.1, where suddenly you had shiny water/surfaces.

How are you going to appreciate the benefits of tessellation, memory optimizations or better multi-core support in a 2D image?

You could emulate almost any static visual in DX10 using DX8.1, so screenies of a sandbox aren't really going to do much. Even something like HDR or specular lighting wouldn't be truly appreciable until you had video and a fleshed-out scene with backgrounds and other objects to interact with.

I'm going to reserve judgment until there's motion video of DX10/10.1 alongside DX11 trying to achieve the same final output.

Also remember this is early hardware with next to no driver development, so I'm not expecting much for a while other than developers doing a lot of crass demo stuff at first.
 
No, not really. Think of tessellation taking a standard 70,000-polygon model running at 2-3 FPS and instead turning it into a 5,000-polygon tessellated model that looks the same but runs at 14 FPS. It's a massive improvement, but you wouldn't SEE a difference in a screenshot, and you wouldn't know that 14 FPS is 'good'.
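Rough back-of-envelope to go with those numbers (mine, not from the post, and assuming the usual rule of thumb that a triangle patch with uniform factor f expands into roughly f x f triangles):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double densePolys = 70000.0;  // the standard hi-poly model
    const double basePolys  = 5000.0;   // the low-poly mesh fed to the tessellator

    // Factor f so that basePolys * f^2 is about densePolys. The vertex
    // shader and memory traffic only ever see the 5,000 control polys;
    // the tessellator generates the rest on-chip.
    const double f = std::sqrt(densePolys / basePolys);
    std::printf("uniform tess factor needed: ~%.1f\n", f);  // ~3.7
    return 0;
}
```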

If tessellation works properly you shouldn't be able to tell the difference between a hi-poly model and a low-poly tessellated model, so a still wouldn't tell you much, and FPS alone wouldn't either, unless, like I said, you had another card running DX9/10/10.1 beside it to compare.

One thing is for sure: it's far from an incremental update to DX9. It has long since been a complete split in how things are done, from a threading, memory and feature perspective.

But also, like I said, you can make DX8.1 look just like something rendered in DX11 in a still shot (likewise, a software/CPU render can look like DX11 output); the tough thing is making them look similar in full-motion video.
 
I'm not sure what he means by it either, but it's probably a loose 'can't do that', i.e. it can be done, but it would take ages to process on a CPU compared to a GPU.
 
Not sure how it's broken down, but maybe by using the compute shaders instead of the CPU it only needs a single pass, and the latency is slashed to the point where the strain on the CPU and/or GPU becomes workable, versus having to reduce the crowd size or suffer severe FPS lag.
 
SS, I think he meant that computing thoughts (fear, direction choice, grouping, etc.) for each character and having them all interact is something a single CPU couldn't do in 'real-time' with so many characters on screen at the same time. It really depends on all the features/emotions/choices that are accounted for by that model.
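For anyone wondering where the CPU falls over, here's a toy C++ sketch (purely illustrative, nothing to do with the actual demo) of the naive every-character-reacts-to-every-other loop; the work grows with the square of the crowd size:

```cpp
#include <cstdio>
#include <vector>

struct Agent { float x, y, fear, dirX, dirY; };

// Every character weighs every other character when deciding what to do,
// so one frame costs n*n "thought" evaluations.
void UpdateCrowd(std::vector<Agent>& agents)
{
    for (size_t i = 0; i < agents.size(); ++i)
        for (size_t j = 0; j < agents.size(); ++j)
            if (i != j) {
                // placeholder "thought": drift away from the neighbour, get a bit more scared
                agents[i].dirX += (agents[i].x > agents[j].x) ? 0.001f : -0.001f;
                agents[i].fear += 0.0001f;
            }
}

int main()
{
    std::vector<Agent> crowd(10000);                 // zero-initialised agents
    for (size_t i = 0; i < crowd.size(); ++i)
        crowd[i].x = static_cast<float>(i);          // spread them out along a line

    UpdateCrowd(crowd);  // ~10^8 pair checks for ONE frame: fine as a batch job,
                         // hopeless at 30+ fps on a single core
    std::printf("first agent fear: %f\n", crowd[0].fear);
    return 0;
}
```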

Now just give me better bots in UT! :sol:
 
What if Nvidia has got something up its sleeve that's going to make us fall flat (flat broke that is)?
http://www.hexus.net/content/item.php?item=18666 It may be highly unlikely, but who knows: maybe AMD gets a very good offer and they dump ATI (that would be crazy, at least from what we can see), because if they kept ATI in the new deal it would turn into a monopoly that would drive prices up, up and away (and keeping ATI might actually hurt them more than help them).

This offer could be plausible because Nvidia hasn't shown much in GPUs lately (not that ATI has shown too much, but at least it's a whole lot more), and Nvidia needs a way to enter the CPU business without being sued or something by Intel. By owning x86-64, Nvidia would nearly force Intel to allow them to use the x86 license that AMD has (the last processors Intel manufactured which did not use AMD's x86-64 design were early versions of the desktop Pentium 4 "Prescott", introduced in February 2004, and mobile Intel Core, introduced in January 2006).

But back to DX11: ATI may have managed to strike a pretty great deal: http://www.youtube.com/watch?v=ghazN5L7Ncw and http://www.youtube.com/watch?v=-eTngR6M37Q . I believe the first games to use DX11 will be released by the four companies in the videos (gee, ain't I a Sherlock), but seriously, the release of AvP from Rebellion (early 2010) and the fact that Phenomic wants to introduce DX11 in BattleForge show that we might even have some DX11 games when the new HD5870 gets released. I hope they get some companies that release highly rated games to work with on implementing DX11; they kind of need it after the very few, barely known games that had DX10.1.
Edit: Also check this link about DX11 and the head in the second video. http://www.pcgameshardware.com/aid,686395/AMD-shows-DirectX-11-Tessellation-in-action/News/