AMD Responds To Nvidia GPP Controversy


kinggremlin

Distinguished


Again, as the program has been described, there is nothing that could be deemed illegal. Anyone who has claimed it is has yet to bring forth an actual example of how it breaks antitrust laws.

Changing the name on the box does not cost companies money. Take a look at this generic search for "ROG advertisement"

https://www.google.com/search?q=ROG+advertisement&tbm=isch&tbo=u&source=univ&sa=X&ved=0ahUKEwiip4DUoMXaAhVKmK0KHboUBFQQsAQISw&biw=2556&bih=2042#imgrc=BmpHiZIpiyNZ6M:

Tell me how many of those results show AMD and Nvidia products in the same ad. You're not going to find one. These companies already have to spend money producing separate ads for each company's products, so no, this move by Nvidia is not costing partners any additional money or changing the way they market. And since Nvidia is sharing advertising costs with the companies that agree, it's probably costing them less overall.
 

blppt

Distinguished
"Likewise, you can't say that Gameworks isn't doing something similar. "

Actually, unless you are referring to PhysX, you *can* say that. AFAIK, unlike the Intel compiler fiasco, it hasn't been proven that Nvidia's GW libraries intentionally cripple performance when they detect they're running on AMD hardware. AMD's software and hardware until now haven't been able to handle heavy tessellation as well as Nvidia's, so yes, I *guess* you could say exploiting that advantage cripples AMD performance, but other than PhysX and TXAA, you can run all of it on AMD hardware.
 

blppt

Distinguished
"But it is a problem, and as long as Nvidia actively "improves" and expands Gameworks, it will remain a problem. "

First of all, your condescending sarcasm aside, half of the things listed on that page, such as ray tracing, haven't even been used in a game yet, and the truly crippling performance options can usually be disabled in-game.

Secondly, if AMD has no way to stop Gameworks from being developed and pushed on devs, they need to get their own tools up to snuff and provide similar funding to push them to devs. You act like AMD is this all-giving and honorable company, when the only reason they push open technology is that trailing in market share means they cannot take a stance like Nvidia's.

Even before Gameworks, AMD was very lax with driver optimization for games, but since you're a die-hard AMD fanboy, that has probably faded from your memory.

And I maintain: the issue with AMD's deficits in Gameworks titles has always been the saturation of tessellation in those libraries. Maybe there are some other tweaks that favor Nvidia, but by and large, simply lowering the tessellation level in the control panel has closed the gap considerably.
 

blppt

Distinguished
Jun 6, 2008
576
92
19,060
"Eh, sure sounded like it. "

If I gave that impression, I apologize, but it would be rather stupid of me to have actually meant that, since one of the primary reasons I ditched my trusty old dual 290X setup was that the new-gen AMD cards were supposed to be much better at handling tessellation.
 

bit_user

Polypheme
Ambassador

Thanks for that. I wouldn't have written such a long post with details & references, except that this:

"they cannot keep designing cards and develop drivers in a pure vacuum undriven by the market"

sounded to me like you were saying they weren't trying to boost geometry and tessellation performance (and successfully, as I think I've shown). Looks like they haven't quite caught up, but you can see the gap has narrowed.

IMO, AMD's biggest problem is that GCN is not aging well. It has a certain elegance, but it also leaves a lot of performance on the table, in the way that it ties threads to CUs and has only one scalar unit per CU.
 
I think GCN is aging just fine. It was designed to be very good at compute, and in that regard it's quite competent. It's not AMD's fault that that isn't where the general gaming market went. It's still useful for business, however. The Ethereum and other crypto-mining crowds don't seem to think there is anything wrong with the GCN architecture.

Tessellation only exists as a problem because NVIDIA made it a point to overuse the feature to exploit an advantage their cards have in that regard. If game designers had otherwise seen a need for such gross amounts of tessellation, AMD would likely have put in more than enough hardware to handle it competently at the levels the market was pushing. ATI had the feature long before NVIDIA ever did, and even Matrox had it in silicon before NVIDIA came along and abused the technology, and all of this was long before Microsoft decided it would make a nice addition to the DirectX API.
 

bit_user

Polypheme
Ambassador

The irony of your statement is that the crypto coins that run well on AMD are actually memory bandwidth-limited - not compute-bound. Also, Nvidia currently dominates GPU-compute. Not that I'm exactly happy about it, but that's the reality.
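To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch (the ~8 KB per hash is Ethash's well-known DAG access pattern; the bandwidth figures are the cards' public spec-sheet numbers, so treat the output as a ceiling, not a benchmark):

```python
# Back-of-the-envelope sketch: why Ethash-style mining is memory-bandwidth-bound.
# Each Ethash hash does 64 mix rounds of 128-byte DAG reads (~8 KB per hash),
# so peak memory bandwidth caps the hashrate regardless of shader throughput.

BYTES_PER_HASH = 64 * 128  # 8192 bytes of DAG traffic per hash

cards = {
    # name: advertised peak memory bandwidth in GB/s (public spec sheets)
    "RX Vega 64 (HBM2)": 484,
    "GTX 1080 (GDDR5X)": 320,
}

for name, bandwidth_gbs in cards.items():
    ceiling_mhs = bandwidth_gbs * 1e9 / BYTES_PER_HASH / 1e6
    print(f"{name}: theoretical ceiling ~{ceiling_mhs:.0f} MH/s")
```

Real-world hashrates land well below those ceilings, but the ordering tracks memory bandwidth rather than shader count, which is the point.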

Anyway, I'm not going to debate the point, but I've read all of their architecture whitepapers (not just Anandtech's articles) and it's not with idle speculation that I say GCN has utilization problems. Spiritually, it shares certain similarities with Bulldozer, except the other way around (multiple vector floating-point units coupled to a single decoder, branch, integer, and scalar unit).


I'm not convinced. If you look at what happens to geometry sizes without tessellation, it becomes untenable to deliver the current level of detail. Tessellation also works well with LOD control and adapting to lower-performance hardware. IMO, it's one of the key reasons why PCIe speed stopped being a major bottleneck for GPUs.
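To make the geometry-size point concrete, a quick illustrative sketch (every number in it is assumed and the mesh-size model is deliberately crude) comparing the upload for a fully pre-tessellated mesh against a coarse patch mesh that gets tessellated on the GPU:

```python
# Illustrative sketch (all sizes assumed): vertex data crossing the PCIe bus
# for the same on-screen detail, with and without on-GPU tessellation.

BYTES_PER_VERTEX = 32  # assumed: packed position + normal + UV

def upload_bytes(triangles):
    """Rough upload size of an indexed triangle mesh."""
    vertices = triangles // 2     # indexed meshes average ~0.5 unique vertices per triangle
    indices = triangles * 3 * 4   # three 32-bit indices per triangle
    return vertices * BYTES_PER_VERTEX + indices

final_triangles = 2_000_000       # detail level we want rasterized
tess_factor = 8                   # factor 8 on a triangle patch yields roughly 64x the triangles

pre_tessellated = upload_bytes(final_triangles)
coarse_patches = upload_bytes(final_triangles // tess_factor**2)

print(f"pre-tessellated mesh: {pre_tessellated / 1e6:5.1f} MB over the bus")
print(f"coarse patches:       {coarse_patches / 1e6:5.1f} MB over the bus")
```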


That's going so far back into history as to be completely irrelevant. But I'll do you one better: the NV1 had quadric patches back in 1995, six months before ATI even launched the 3D Rage.
 
I'm not sure we are disagreeing about anything here. I'm not saying tessellation is a problem. I think tessellation is a great technology, and I agree that it went a long way toward sorting out the bandwidth problem that occurs when the CPU has too much mesh data to send to the GPU. I wish the technology had been put to use more back in 2001-2002, when manufacturers first implemented hardware support for it. Unfortunately, we had to wait for the primary Windows gaming API to incorporate it before it became something most game devs wanted to spend time on.

What I was trying to say is that tessellation becomes a problem when it is abused. There is certainly a point to be made about diminishing returns from tessellation. I've read there is little to be gained visually beyond about 8x, though I'm sure the precise level, and the circumstances, can be argued, especially since it's been reported that graphical anomalies appear when tessellation levels are set too low in certain GW titles.

My point about ATI having tessellation in 2001 is that they have had plenty of time to work out how to do it correctly. Tessellation works perfectly fine on AMD hardware at this point, and pretty much always has. The overwhelming cost to performance comes when software pushes the tessellation factor beyond what the underlying hardware was designed to handle. This is why AMD has the driver feature to clamp tessellation at 16x, but you hardly sound like you need any of that explained.
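For a sense of why the argument usually lands somewhere around 8x-16x, here's a toy calculation (all numbers assumed) of how triangle counts grow with the factor while each triangle's share of the screen collapses:

```python
# Illustrative numbers only: triangle count grows roughly with the square of the
# tessellation factor, while the screen area each triangle covers collapses,
# which is where the diminishing returns (and AMD's 16x driver clamp) come from.

base_triangles = 5_000        # assumed coarse control mesh for one object
screen_pixels = 1920 * 1080   # assume the object fills most of a 1080p frame

for factor in (1, 2, 4, 8, 16, 32, 64):
    triangles = base_triangles * factor**2
    pixels_per_triangle = screen_pixels / triangles
    print(f"{factor:>2}x: {triangles:>12,} triangles, ~{pixels_per_triangle:7.2f} px/triangle")
```

Past 16x on these made-up numbers, each triangle is a fraction of a pixel, which is work the rasterizer mostly throws away.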

I would say a feature that is still in use today is a teeny tiny bit more relevant than, say, a feature that was never used outside of its own proprietary ecosystem and hasn't been in silicon for many, many years.

Oh, c'mon now, I'm not about to argue that GCN is a perfect architecture, but it certainly has life left in it, and is a great general purpose architecture. The consumer Vega cards are hashing plenty of currencies faster than the consumer NVIDIA cards.

I have no problem admitting that something is definitely wrong when AMD can throw an overwhelmingly larger number of shaders at the problem of rendering graphics for games and not see anything like a proportional gain in performance, despite the gain in power consumption. If AMD ever hits on the magic formula, GCN could end up a near-perfect architecture. Until then, while I think it's aging fine and you disagree, it's what AMD has and what they're working with. Its current iteration is quite efficient, too, when not pushed to clocks beyond where it scales well. If AMD can take lower-clocked versions but combine multiples of them in one package, something similar to what they have done with EPYC, they may be a lot closer to the right combination. Power consumption could be kept in check a lot better on lower-clocked chips. They would still have to figure out how to keep such a wide GPU fed, however.
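The spec-sheet arithmetic behind that observation, for what it's worth (shader counts and boost clocks are the published figures; the gaming comparison is the rough consensus of reviews from the time):

```python
# Spec-sheet arithmetic behind the "more shaders, not proportionally more frames"
# observation. Clocks are the advertised boost clocks; 2 FLOPs/shader/clock (FMA).

cards = {
    "RX Vega 64": (4096, 1546),  # shaders, boost MHz
    "GTX 1080":   (2560, 1733),
}

for name, (shaders, boost_mhz) in cards.items():
    tflops = shaders * 2 * boost_mhz * 1e6 / 1e12
    print(f"{name}: {shaders} shaders @ {boost_mhz} MHz -> ~{tflops:.1f} TFLOPS FP32")

# Roughly 12.7 vs 8.9 TFLOPS on paper, yet the two cards traded blows in most
# games of the era, i.e. a sizable chunk of GCN's theoretical throughput never
# reaches the screen.
```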

Even if AMD changed to a new architecture, they still have to hit the right formula for resources, and still need to keep the thing fed. I'm not sure that forcing them to redesign the basic building blocks of their GPU from scratch is going to help them in that regard.
 

bit_user

Polypheme
Ambassador

The point of diminishing returns for tessellation factor depends on how the 3D models are designed, what geometry shaders you're running on the tessellation output, and what resolution you're rendering at.


BTW, AMD actually added a hardware-level discard for tessellation output that's not visible. I think that's what they refer to as their primitive discard accelerator.
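Conceptually, that stage just culls tessellated triangles that can't contribute visible pixels before they reach the rasterizer; here's a minimal sketch of that kind of test (the naming and threshold are mine, not AMD's hardware logic):

```python
# Minimal sketch (my own naming and threshold, not AMD's hardware) of the kind
# of test a primitive-discard stage performs: drop tessellated triangles that
# are back-facing, degenerate, or slivers too small to matter.

def signed_area_2x(a, b, c):
    """Twice the signed area of a screen-space triangle (counter-clockwise > 0)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def should_discard(a, b, c, min_area_2x=0.5):
    area_2x = signed_area_2x(a, b, c)
    if area_2x <= 0.0:             # back-facing or zero-area
        return True
    return area_2x < min_area_2x   # sliver produced by an excessive tess factor

# Example: an over-tessellated sliver gets culled before rasterization
print(should_discard((10.0, 10.0), (10.2, 10.0), (10.1, 10.1)))  # True
```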


Their new architecture is due in the generation after Navi. Before then, we get Vega+ @ 7 nm (announcing soon; reaching consumers in early 2019) and obviously Navi.
 

King_V

Illustrious
Ambassador
Drivers? Really?

OK, look, I get it: I have a sample size of a small handful of people, myself and two friends. I remember them both having issues with their laptops where the driver would quit and then restart, and an alert would pop up telling you that's what happened.

I was baffled by this, as I'd never encountered anything like it. I've never had a laptop, only desktops, and yes, I'd skip generations, but...

I currently do have a GTX 1080, which I got within the past three months. Until that point, the last Nvidia-based card I had was a 4MB PCI Riva 128. Well, that and a secondary system that had an Nvidia motherboard (Q9550 CPU) but an ATI video card. I tried to put a GTX 660 Ti in it, and it was bizarrely unstable even in NON-gaming use, with the driver quitting constantly. However, that same video card with the same driver version worked fine in my son's Sandy Bridge system.

That was the one exception - but, for my main systems, since that Riva 128, it's been all ATI/AMD. I've never had issues with driver stability in all that time. I only got the GTX 1080 to replace my R9 285 because the 285 couldn't handle gaming on my 3840x1600 monitor.

So, I guess I find all the complaints about how horribly unstable ATI/AMD's drivers are to be surreal.
 

alextheblue

Distinguished
Never said anything remotely like that.


I'm not a die-hard AMD fanboy; Nvidia has the fastest graphics card on the market, and they have some amazing hardware. I just don't care for Nvidia's behavior and proprietary tech.

You can maintain the belief that the performance deficit caused by Gameworks comes down to tessellation tech from years ago all you want. But newer Gameworks titles use additional tech, and even the tessellation code isn't static. You don't know which devs use what, but at least now you see how much STUFF is in GW. You even mentioned one earlier: God rays. Whether all these fancy features can be disabled or not isn't relevant if the game turns them on by default at various settings (high, very high, etc.). Thus many testers bench with these features on, whether or not they realize it (or care).

You can also blame them for not properly optimizing for something that's almost impossible to optimize for (AMD can neither see nor alter Gameworks code; that's why it's a black box). But even if "perfect" optimization within the driver were possible, it is STILL only one piece of the puzzle. If the code being fed to the driver isn't also optimized for your architecture (hint: it's not), then part of the performance equation is quite literally out of your hands. There's nothing they can do to change the code their driver is being fed, including Gameworks. That is what you refuse to understand.

As far as competing with Nvidia by throwing equal cash around, you're right, that would be swell. But they just don't have the money. Look at their graphics budgets. Sorry, Nvidia is the bigger fish.
 
Just check a couple of the few games that call upon AMD features, where the driver is pretty much out of the equation: Doom (2016, in Vulkan) and Wolfenstein II: The New Colossus. Vega gives Pascal a run for its money there, where Gameworks is out of the picture and the display driver's overhead (and thus actual game optimization for a proprietary feature) actually goes AMD's way. Now just imagine that more games start making use of those features: your shiny 1080 suddenly becomes so last year.
 

Eaglecreeker

Reputable
Microsoft has been discreetly pushing Windows 10 on hardware manufacturers with similar strong-arm tactics for a while already, yet not a lot was made of it due to the lack of competition. Not that there aren't other OS options, but competitively speaking there really aren't. Still, this has pushed this gamer back toward team red, even at a loss in performance, if only because it's a pretty lame move to pull when you already own the market share. In fact, why some suit even felt compelled to bring such shenanigans to the table in the first place makes me wonder what's in the Kool-Aid at Nvidia. Fans of either product should be more involved in promoting competition than they are, IMO. Helping one team over all others only creates higher pricing and less development; PC enthusiasts have had that shining example with Intel for several years, so it's not like gamers shouldn't have seen this coming from the get-go. But it really disappoints me to see these partners folding so quickly; it makes it look like this might have been even more than first reported.
 