Nvidia Boasts New Driver Surpasses AMD's Mantle in Games


sjc1017

Distinguished
May 4, 2011
69
0
18,630
Well, with Mantle you need a GCN-compatible GPU; since this is just a driver update, I was wondering whether it would be universally advantageous.
 

scannall

Distinguished
Jan 28, 2012
354
0
18,810
Like everything from Nvidia, you need to take it with a very large grain of salt. They do make good stuff and all, but it's always best to wait for independent testing.
 
It really gets to me when the whole "backward compatible" argument is used to try to diminish something. The VLIW4 parts in the HD 6000 series are 2009/2010 technology; supporting Mantle on those cards probably wasn't in the engineers' minds at that stage. It's similar with drivers: new AMD drivers don't support all older hardware either, and it comes down to the fact that you've had 4-7 years to catch up to modern technology and reap the rewards. It sounds harsh, but you can neither future-proof antiquated technology nor expect it to benefit.
 

sjc1017

Distinguished
May 4, 2011
69
0
18,630
Well, the question about backwards compatibility was not aimed at AMD. I was wondering whether, this being a driver update, it might benefit older hardware (I have Nvidia in my two-year-old laptops and AMD in my desktop). It seems a reasonable question to ask: "Will it benefit my ageing hardware?" I don't dispute that one can't expect older hardware to benefit, but it's still a question that arises naturally; since this is a driver update, you wonder whether there might be an advantage for all Nvidia hardware. AMD made clear from the outset that you need GCN-generation hardware to benefit from Mantle, so no such issue arises there since you know the conditions. With this, there is little information given in the article apart from the mention of a 780.

 

heero yuy

Distinguished
Jul 25, 2010
494
0
18,810


It will support all cards Nvidia supports (not the 300 series, and possibly not the 400 and 500 series either; I am not quite sure).

 

daglesj

Distinguished
Jul 14, 2007
485
21
18,785
Actually, all this recent development is quite frankly insulting to gamers and the customer base.

Why? Hmm, well, let's see. Game development has been moribund and lackluster for quite a few years. Performance improvements have been meager at best. Then, out of nowhere, one breaks loose from the pack with something 'new', and all of a sudden everyone is pulling 'near the metal' performance boosts out of their ass like crazy.

They have known about this stuff all along but were just happy to keep the status quo. Lazy!

Oh and as a rule take any performance figures from a company marketing slide and reduce them by at least 50% to get the real world figure.
 

gaborbarla

Distinguished
I find it extremely unprofessional to distort the second graph of this article. Tom's Hardware should be better than this. The second graph looks like a 40% improvement, when in reality, if you check the scales, they just cut the bottom of the graph off.
 

silverblue

Distinguished
Jul 22, 2009
1,199
4
19,285
There appears to be no information on the type of R9 290X used here, which is particularly telling considering the significant performance gains achieved by just having a decent cooler on the thing. Also, the gaping difference between synthetic and actual gaming performance rears its ugly head yet again. I'd consider 55 and 58 fps a wash in such circumstances, and would certainly prefer the 55 if, amusingly, NVIDIA are the ones suffering from the less consistent performance this time around.

Once the benchmarks are out and the image quality scrutinised, we can finally see the fruits of NVIDIA's labour, but until then...
 

InvalidError

Titan
Moderator

If you look at persistent buffer presentations for OGL and DX12, most of the API, driver and kernel overhead from having to transfer and re-validate data on every call is eliminated, while the remaining user-mode API work ends up spread much more evenly across threads/cores instead of piling up almost entirely in the main thread.

Nvidia said they could achieve up to 30X as many draw calls per second thanks to persistent buffers and related tweaks, which is three times AMD's claim with Mantle.
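For the curious, this is roughly the persistent-mapped buffer pattern those presentations describe, sketched against OpenGL 4.4's ARB_buffer_storage. The helper names are just for illustration, and synchronization with in-flight draws is left out:

```cpp
// Minimal sketch of a persistent-mapped vertex buffer (OpenGL 4.4 / ARB_buffer_storage).
// Assumes a GL context and a function loader (e.g. GLEW) are already set up.
#include <GL/glew.h>
#include <cstddef>
#include <cstring>

struct PersistentBuffer {
    GLuint id  = 0;
    void*  ptr = nullptr;   // stays valid for the buffer's whole lifetime
    size_t size = 0;
};

PersistentBuffer create_persistent_buffer(size_t size)
{
    PersistentBuffer buf;
    buf.size = size;
    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

    glGenBuffers(1, &buf.id);
    glBindBuffer(GL_ARRAY_BUFFER, buf.id);
    // Immutable storage: the driver validates the allocation once, up front.
    glBufferStorage(GL_ARRAY_BUFFER, size, nullptr, flags);
    // Map once; no per-frame map/unmap or re-validation as with glBufferSubData.
    buf.ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    return buf;
}

void upload_vertices(PersistentBuffer& buf, const void* data, size_t bytes, size_t offset)
{
    // Plain memcpy into GPU-visible memory; any thread that owns the pointer can fill it,
    // which is what lets the CPU-side work spread across cores.
    std::memcpy(static_cast<char*>(buf.ptr) + offset, data, bytes);
    // With GL_MAP_COHERENT_BIT no explicit flush is needed; fencing against draws that
    // still read this range (glFenceSync/glClientWaitSync) is the application's job.
}
```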
 

keyrock

Distinguished
Jul 20, 2011
10
0
18,520
As always, I'm wary of any manufacturer-published benchmarks, and that goes for Nvidia, AMD, Intel, etc. Let's say, for the sake of argument, that these gains are real: are we talking strictly DX applications here, or are there OpenGL gains too?
 

Jaroslav Jandek

Honorable
Jan 13, 2014
103
0
10,680
Anyone capable of reading can see the FPS clearly. With a 0-60 FPS graph, you would barely see the 2/4 fps difference between the driver versions - that is the real reason why it is done this way.

You have no idea what it means to optimize every game, driver and API. If it was as easy as you probably think it is, we would have done so with every game. If we actually had to fully optimize every single piece of software, you would have no games to play - most game development studios would simply go bankrupt before releasing a game...

If you actually check the driver history, you would see each version improving performance in games (this is an evolution, not a revolution). Mantle simply encouraged the public (and PR departments) to talk about it more (which is a good thing).
 

johnvand

Honorable
Sep 30, 2012
3
0
10,510
That Thief chart commits a huge visualization crime. By starting the Y axis at 48 instead of zero, it distorts the magnitude of the gains. If the best thing you have to show off is a 4 FPS gain, and you need to use tricks to make it appear more substantial, I'm not interested.
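Just to put numbers on that, taking the 55 and 58 fps figures quoted above (the exact chart values are my guess, not from the article), a bar chart whose y-axis starts at 48 turns a roughly 5% gain into what looks like a roughly 40% one:

```cpp
// Quick arithmetic on why a truncated y-axis exaggerates a small FPS gain.
// The 48 fps axis origin and the ~55 vs ~58 fps bars come from this thread;
// they may not match the article's exact numbers.
#include <cstdio>

int main()
{
    const double axis_origin = 48.0;
    const double old_fps = 55.0;
    const double new_fps = 58.0;

    // Actual improvement, measured from zero.
    const double real_gain = (new_fps - old_fps) / old_fps * 100.0;

    // Apparent improvement: ratio of the visible bar heights above the truncated origin.
    const double apparent_gain =
        ((new_fps - axis_origin) / (old_fps - axis_origin) - 1.0) * 100.0;

    std::printf("real gain: %.1f%%, apparent gain on truncated axis: %.1f%%\n",
                real_gain, apparent_gain);   // roughly 5.5% vs 42.9%
    return 0;
}
```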
 

gaborbarla

Distinguished


I don't think so. I think they do it this way because 95% of people don't read the article, or just glance at the chart, and it looks favourable to Nvidia, so they do it. The whole point of the comparison would be to see that in the second graph there is only an 8-10% difference, as opposed to the larger theoretical gap in the first graph. At first glance, however, it seems that the practical improvement is higher, until you actually notice that the scale of the second graph has been truncated, which is not a valid scientific representation anyway. The point of a scientific comparison should be to visually demonstrate that there is a small improvement, not to zoom in on the little improvement and make it look good for Nvidia. Having said this, I have a 780 Ti and I am not an AMD fanboi.

 

burkhartmj

Honorable
Aug 31, 2012
111
0
10,680


Because that was in 2011. The market can change a lot in 3 years. Do you think companies should just ignore market and technology shifts and stick with assertions they made years ago that might not be the case anymore?
 

bloodroses75

Distinguished
Aug 10, 2009
186
0
18,710
I'll believe it when I actually see it. As others have mentioned, this isn't an apples-to-apples comparison. Also, as any programmer knows, there is much more overhead in DirectX than in going directly to the metal. Under fully optimized conditions, DirectX will always be slower.
 

abimocorde

Honorable
Sep 23, 2013
23
0
10,510
I don't buy it. The 780 Ti is a faster and more expensive piece of hardware than a 290X, so it has an advantage regardless of drivers. Also, Mantle is new and its potential is not fully unlocked yet; more gains will come of it. I think this is Nvidia trying to dodge the fact that they are late to the low-level API party, which needs to happen. Hopefully DirectX 12 can help them out there. PCs NEED low-level APIs to unbottleneck the CPU, regardless of vendor preference.
What Nvidia doesn't get is that the main purpose of Mantle is to allow developers to make better use of the GPU from their apps and leave the CPU alone, in an open-source sort of way. How open is the Nvidia driver in that graphic? "So, let's keep everything closed so we can keep making things our way, mkay?"
 


If AMD fully supported DX11 in their drivers, there might not have been any need for Mantle; did you ever think of that?
 

InvalidError

Titan
Moderator

You meant DX12, right?

Sure. But DX12's public release is still over a year away, unless Microsoft changes its mind about it.
 


No, I mean DX11. The Star Swarm demo proved it quite nicely: AMD have admitted to not supporting the multithreading feature of DX11 in their drivers, whilst Nvidia DO support it, and in the Star Swarm demo it makes quite a difference.
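For anyone wondering what that DX11 multithreading feature actually is: it's deferred contexts backed by driver command lists. A minimal sketch of the pattern follows; record_scene is a hypothetical placeholder and error handling is omitted:

```cpp
// Minimal sketch of D3D11 multithreaded command recording: a worker thread
// records draws on a deferred context, and the main thread replays them.
#include <d3d11.h>

void record_scene(ID3D11DeviceContext* ctx); // hypothetical: issues the actual draw calls

ID3D11CommandList* build_command_list_on_worker(ID3D11Device* device)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // Commands recorded here do not touch the GPU yet. Whether the driver builds
    // them in parallel depends on its command-list support (D3D11_FEATURE_THREADING),
    // which is the capability the post above refers to.
    record_scene(deferred);

    ID3D11CommandList* list = nullptr;
    deferred->FinishCommandList(FALSE, &list);
    deferred->Release();
    return list;
}

void submit_on_main_thread(ID3D11DeviceContext* immediate, ID3D11CommandList* list)
{
    // Replays the pre-recorded commands on the immediate context.
    immediate->ExecuteCommandList(list, FALSE);
    list->Release();
}
```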
 
Status
Not open for further replies.