AMD Vega MegaThread! FAQ and Resources

Personally I think the majority of game developers will stick with high-level APIs. This low-level API business is quite complicated on PC hardware. Even Mantle, which was tweaked specifically for AMD hardware, had its own issues back then. Now they want a low-level API that works across all kinds of hardware, and that's not going to end well, at least on the DirectX side of things. Vulkan has specific features that make it better suited to low-level work than DirectX, but those same features can also be a reason developers avoid it. There are already efforts to make Vulkan more high-level, like OpenGL, because not everyone is willing to deal with the complexity of going low level. Remember, a PC is not a console when it comes to low-level APIs.

You said that, like it or not, developers have to deal with it. To me that isn't really the issue. The biggest issue is still resources. A publisher is not going to spend, for example, 50% more resources just to gain a mere 10% more performance. To me, resources are one of the biggest obstacles keeping many developers from truly building games the way DX12/Vulkan were supposed to be used. Honestly, I wouldn't be surprised if DX13 ends up being all about a high-level API once more.
 
The low-level APIs are not harder than programming in C or C++, which are basically the two languages 99% of Windows software is written in.

Managing resources for the hardware has been a task any decent developer has to think about and design for, even with high-level languages (scripting scum are fourth-class citizens) and APIs.
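
For what it's worth, here's a rough C++ sketch of the kind of lifetime bookkeeping a low-level API hands back to you. The GpuBuffer type and gpuAlloc/gpuFree calls are made up for illustration (they just wrap malloc/free here so it compiles), not any real graphics API:

[code]
// Purely illustrative: explicit resource lifetime, the bookkeeping a
// low-level API pushes back onto the application.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <utility>

// Hypothetical stand-ins for device memory allocation -- here they just
// wrap malloc/free so the example runs; a real API would go through the driver.
struct GpuBuffer { void* mem = nullptr; std::size_t size = 0; };

GpuBuffer gpuAlloc(std::size_t size) { return { std::malloc(size), size }; }
void      gpuFree(GpuBuffer& buf)    { std::free(buf.mem); buf = {}; }

// The usual C++ answer is a thin RAII wrapper: same discipline as
// malloc/free, just made impossible to forget.
class ScopedBuffer {
public:
    explicit ScopedBuffer(std::size_t size) : buf_(gpuAlloc(size)) {}
    ~ScopedBuffer() { gpuFree(buf_); }

    ScopedBuffer(const ScopedBuffer&) = delete;            // no accidental copies
    ScopedBuffer& operator=(const ScopedBuffer&) = delete;
    ScopedBuffer(ScopedBuffer&& other) noexcept : buf_(std::exchange(other.buf_, {})) {}

    GpuBuffer& get() { return buf_; }
private:
    GpuBuffer buf_;
};

int main() {
    ScopedBuffer vertices(64 * 1024);   // released automatically at scope exit
    std::printf("allocated %zu bytes\n", vertices.get().size);
}
[/code]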

Cheers!
 

jaymc

Even Intel's minimum core count has increased.
The core wars are here; it's all about multi-threading and async compute. Parallel processing is not a fad, core counts keep climbing and the software is adapting. I think DX13 might just achieve the same goal more easily and/or more efficiently. The days of one heavily laden thread and three barely active ones are almost gone. Thank god, about time; it feels like we're making some progress.
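
Just to put a picture on "spreading the load": a tiny, purely illustrative C++ sketch (standard library only, nothing engine-specific) that splits a toy workload across however many hardware threads the CPU reports instead of hammering one:

[code]
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Toy workload: sum a big array by dividing it across all hardware threads.
int main() {
    const std::size_t n = 1 << 24;
    std::vector<int> data(n, 1);

    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&, w] {
            // Each thread sums its own contiguous slice; no shared writes.
            std::size_t begin = n * w / workers;
            std::size_t end   = n * (w + 1) / workers;
            partial[w] = std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
        });
    }
    for (auto& t : pool) t.join();

    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::printf("sum = %lld across %u threads\n", total, workers);
}
[/code]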
 


What do you want to bet MS will skip over 13 and go to 14, just like hotels that have no 13th floor and call it 14?

We'll have to just wait and see how far the "core wars" go, but I have a feeling that after 7nm we will see a shift to quantum computing and a whole new processor revolution. Intel themselves have said the shrinks are getting down to trying to make a transistor from a single atom. It's coming time to move to the next tech, whatever that turns out to be: quantum processors, nanotubes, whatever...
 

jaymc

haha.. I wouldn't be surprised...

That would be deadly, quantum computers... I hope it is that close!
 
If the computing model/paradigm changes away from transistors (0s and 1s), then it will be very disruptive to the whole industry. Adoption will be slow, since big corporations would basically have to migrate systems from one full-stack vertical paradigm to another. Migration projects are expensive as hell, and companies rarely take on those costs unless absolutely necessary and/or threatened by market conditions or laws.

My point is, even if we can't get to a smaller transistor, transistors will still be used for many, many years to come, until the "migration" issue is solved and becomes cheap enough to justify investing in it. It's sad, really, but that is reality.

That being said, quantum computing shifts from deterministic 0s and 1s to a probabilistic superposition over [0,1] (probabilistic in reality, though it can be reduced to something deterministic on readout). So if you change the common way of storing and retrieving data and calculating values, the whole current computing model gets thrown in the garbage bin. Mathematically speaking, you can go from one to the other, and, as you would correctly assume, going from quantum to regular 0-1 is easier than the other way around. With that we might be able to accommodate a "transition" period until quantum computers can make full use of the new paradigm. Programming languages might not necessarily change (high-level ones at least), but they will need modifications at the very least, I'd think.
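
For anyone curious, the standard textbook way to write that superposition (just the usual notation, nothing vendor-specific), in LaTeX:

[code]
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement collapses the state: you read 0 with probability |alpha|^2
% and 1 with probability |beta|^2 -- probabilistic inside, deterministic on readout.
[/code]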

Cheers!
 


The programming itself might not be that hard, but optimization is a different story. A low-level API gives developers more work, and what's the point of doing more work if you can't get more performance out of it? Many developers have said the CPU portion is easy, but not the GPU portion. Right now the issue with DX11 is that the API focuses most of the work on a single core. MS only needs to fix that issue. They don't need to increase game developers' burden by making them optimize everything themselves. Let the GPU optimization be done by the ones who know the hardware best: the GPU makers themselves.
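
To illustrate what's being argued about: a hedged sketch of the DX12-style idea of recording work on several threads and submitting it together, versus funnelling every call through one core as DX11-era code tends to. The CommandList type and recordScenePart function here are hypothetical stand-ins, not the actual D3D interfaces:

[code]
#include <cstdio>
#include <thread>
#include <vector>

struct CommandList { int drawCalls = 0; };      // hypothetical, not ID3D12GraphicsCommandList

CommandList recordScenePart(int part) {          // hypothetical per-thread recording work
    CommandList cl;
    cl.drawCalls = 100 + part;                   // pretend we recorded some draws here
    return cl;
}

int main() {
    const int parts = 4;
    std::vector<CommandList> lists(parts);
    std::vector<std::thread> pool;

    // Each thread records its own command list independently...
    for (int p = 0; p < parts; ++p)
        pool.emplace_back([&, p] { lists[p] = recordScenePart(p); });
    for (auto& t : pool) t.join();

    // ...and they are submitted together, instead of one thread issuing everything.
    int total = 0;
    for (const auto& cl : lists) total += cl.drawCalls;
    std::printf("recorded %d draw calls on %d threads\n", total, parts);
}
[/code]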
 


Almost gone? Lol, no. Just look at Forza 7. Turn 10 said they purposely hammer one core to keep input latency as low as possible, and that's a pure DX12 game to boot. Try to look at reality more. Some games can benefit from more cores (like strategy games), but there will be others that won't benefit much from more cores because of their design. Four to six cores will be plenty for the majority of games for a very long time.
 


Sadly, I must agree with you. However, I don't think the 4, 6 and 8 core CPUs will be around as long as the 2 and 4 core generations. AMD all but forced Intel to finally raise their core counts, but the question is whether Intel will try to idle at them or push them further. AMD showed Intel that while power efficiency is great, people are really looking for more power and thus more cores. To an extent, developers have kept threading low because Intel wasn't moving forward. We can only hope that with Intel moving to 6 cores, developers will be more open to threading, or at least the ones that can.

@Yuka - Adoption of the next tech, like quantum computing, will be slow, but there are some companies willing to drop major money on such things, so long as the return is big and they can get a decent deal on the machines. I know of a company that just dropped 1 billion on data center upgrades and it didn't even put a dent in their IT budget.
The key will be compatibility with current software. Many larger companies create their own software, or buy some and tweak the crap out of it, so they will not want to do that work all over again while their shiny new tech just sits there like a paperweight. There would have to be a clear and efficient migration path for the software.
 


There's a term you might have heard that explains why that is: CAPEX.

When your core business relies heavily on infrastructure, you will upgrade to the next best thing as soon as it is available, since it affects your bottom line directly. But would you be able to justify such an investment to (for example) worldwide banks with monstrous infrastructure? That would be a tough sell. If you can demonstrate it will positively affect their bottom line, they will do it; otherwise they'll keep those systems until they can no longer keep them or upgrading becomes cheaper. There's a reason "mainframes" still exist in this day and age.

But hey, too much off topic; as much as I'd love to keep talking about this, I think it's enough, haha.

Cheers!
 


Agreed, too much off topic, I almost forgot it was a Vega thread. But on a somewhat related note, I'm hoping that the process shrink, even though it doesn't seem like much, might help AMD with their power draw. Not sure how much difference it will make going from 14nm to 12nm, but apparently it's enough for AMD to see some kind of value in it.

With Polaris in a rather good place on performance per watt, I'm not sure how well Vega 11 will do replacing it. However, I have a feeling that we have seen all that Polaris has to offer and that Vega will be the whole portfolio until Navi.

Do we know if the next iteration of Vega is just the process shrink or are we looking at some (hopefully significant) optimizations?
 
I think when it comes to AMD GPUs, doing something with the architecture is more important than a node shrink at the moment. Just look at Nvidia's Volta. Even on "12nm" they still made changes to the architecture rather than relying solely on the node improvement to increase efficiency. GV100 is a lot bigger than GP100, but power-consumption-wise they are the same (rated at 300W).
 

jaymc

It seems Nvidia did have a game ready driver for Forza 7 at launch... but it wasn't as ready as their new game ready driver, which is so much more ready; it's very, very ready, it seems. Boasting a 15-20 percent improvement over their last, eh, not-so-ready game ready driver!
Anyhow, their new driver has never been so ready. Until the next one, I guess :)

Here's new video from AdoredTV:
https://youtu.be/u5l93xEbN9Q
 
Also, the new 387 driver branch has new support for DX12-related stuff (SM6). Not sure if that has any effect on Forza 7 or not. But from the various discussions about the issue with Forza 7, it might have to do with Turn 10 putting as much work as they can on a single core, hence the 1080 Ti starts pulling ahead once the bottleneck shifts towards the GPU (4K).
 

jaymc

Big time. I can't imagine the amount of work that goes into it... and it's constant, never-ending even... always a new title on the way. After all the study and skill they've acquired, that job must be like a living hell. It would probably be less painful to burn the place to the ground and serve a jail sentence instead. I'd say it's an absolute nightmare. Hopefully they get rotated in and out of the role.
 


Well, he definitely gets a pass from everyone for that statement. Most of the current "driver optimization" is just identifying which OGL/DX operations the games use a lot and optimizing paths for them in the drivers. With DX12 and Vulkan, supposedly, that burden shifts to the game developers instead. Good and bad, double-edged sword, whatever you want to call it. It's a bit like going back to programming in C using pointers and malloc, or straight assembler (not quite, but you get the point).

Game developers just need to step up their game (pun might be intended) with their graphics engines of choice. I would imagine Epic, id and the other "game engine" providers are actually happy with low-level APIs, since they can deliver and ensure a more consistent experience across hardware; the ball will be in their court. Higher development costs might come hand in hand with it as well.

It's a nice can of worms, haha.

Cheers!
 
He was saying that in the context of game development, and not specifically referring to himself.

Basically the point was that it's better for the developer to keep control than letting a driver do what it thinks is best in a given situation. Because the game dev will be able to analyze what is actually happening, and optimize things accordingly.

He wasn't ruling out a driver developer doing the same kind of analysis and making similar changes through the driver.

So the site has changed his statement from saying game developers know more about their code than a generic GPU driver without game-specific optimizations, to saying he personally is more competent than GPU driver developers.

Clickbait sucks.
 


A couple of models have turned up on some of the YouTube tech channels for review, I believe, although I don't think they are out yet...
 


VCZ ran some comparisons between Radeon and GeForce custom cards in their latest 1070 Ti article. Right now there are 40 custom models confirmed for the GTX 1070 Ti at launch. In comparison, there are only 6 known custom designs for RX Vega (including the MSI Air Boost edition, which just replaces AMD's reference blower with MSI's own blower design). WhyCry said he was tired of asking board partners about it, and that maybe even the board partners themselves are tired of this "custom Vega".

https://videocardz.com/73742/nvidia-geforce-gtx-1070-ti-custom-models-and-overclocking
 
How come RX Vega drops frames so much in some games? I heard both cards drop below 60fps in AC Origins at 1080p. Watching comparison videos, they drop frames a lot. One video showed Vega [strike]64[/strike] 56 at the same performance as a Fury X (Crysis 3).
 
Because Vega's GPU clock fluctuates all the time? That could be the reason. There are many videos comparing Vega with Nvidia's 1080/1080 Ti, and one thing I notice is that Vega's clock jumps around all the time, even when the temperature isn't that high (like below 60C). Some people say it could be throttling due to poor cooling from the reference blower, but even at such low temperatures the clock still behaves like that. Where did you see Vega only performing at the same level as a Fury X in C3? Also, what resolution was it? Though to a certain extent, I do think Vega might have issues similar to Fiji's.
 