Ask Me Anything - Official AMD Radeon Representatives

Thank you so very much for taking the time to answer so many questions. I, for one, am very excited to read more answers as the day progresses. I have a few questions that may be pushing what it means to ask Radeon-specific questions - but as they say, nothing ventured, nothing gained! 🙂 I would like to preface my questions by stating that I am a huge ATi/Radeon fan.


1. Back in 2008, AMD put forth a tech demo simply labeled "Cinema 2.0" which starred Ruby. This, along with the more recent (and very ugly) Ruby demo, has yet to be released the way its predecessors were. Is there any way we could get our hands on these demos, much as we once could with "Double Cross", "The Assassin", "Dangerous Curves" and "Whiteout" - or are downloadable tech demos now nothing more than nostalgia-trip material?

2. As a former fan of the ATi brand and a proud patron of Radeon, I find it difficult to display my pride. Back when ATi made its own brand of cards, it would often include such baubles as t-shirts, CDs, mouse pads and, most important of all, case stickers. Today's Radeon AIB partners often do not include such trinkets - and when they do, they plaster their own brand over them (read: Sapphire). Where can we get hold of stickers like the "Gaming Evolved" and "Radeon Graphics" designs and other AMD-related items? I do not think liking AMD is something to be ashamed of, and I would proudly show off what I have when I go to LAN parties... if only I could.

3. The inner geek in me would love to know (straight from the horse's mouth) why AMD never took NVIDIA up on its offer to implement PhysX. As far as I can tell, either NVIDIA never actually offered, or they did and you had a good reason to turn them down. What that reason could have been is often debated, but I have yet to come across an answer directly from someone who works at AMD.


I look forward to AMD's bright future. I hope to make it out to San Jose for AMD FAN DAY next week! 🙂
 


[Image: "HAHAHA... NO" reaction GIF]


I'm sorry, I can't help myself. I'm calling this one out.

This card does not run fine at 95°C, at least not with stock cooling. 95°C is neither "optimal" nor "ideal". The reviews that Tom's Hardware has done clearly disprove this. Even with this very high thermal limit, and with its fan at its set maximum (40% or 55%, depending on mode), the card still has to throttle its clock rate below its alleged "stock" level just to stop itself from overheating.

[Chart: R9 290X clock rate falling over time under load]


As you can see here, after just a few minutes the strain on the card becomes so great that the voltage and clock rate have to drop to keep the card from exceeding 95°C. This is not what a card gracefully managing its thermal limit and "using every watt" does. This looks very much like a case of over-ambitiously trying to keep the card at levels it just can't sustain in real-world use.
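
In case it isn't obvious what's happening in that chart, here's a toy model of a temperature-driven clock governor. All constants are invented - this is not AMD's actual PowerTune algorithm, just the general shape of the behaviour:

    /* Toy model of a temperature-driven clock governor (C).
       Constants are invented; this is NOT AMD's PowerTune logic. */
    #include <stdio.h>

    #define TEMP_LIMIT_C  95.0   /* thermal ceiling the card defends  */
    #define CLOCK_MAX_MHZ 1000   /* advertised "up to" clock          */
    #define CLOCK_MIN_MHZ 727    /* assumed floor for this sketch     */
    #define STEP_MHZ      13     /* clock shed per iteration          */

    /* Fake thermal model: warmer case air + higher clock = hotter die. */
    static double read_temp(int clock_mhz, double ambient_c) {
        return ambient_c + 0.045 * clock_mhz;
    }

    int main(void) {
        int clock = CLOCK_MAX_MHZ;
        double ambient = 52.0;            /* case air heats up over time */

        for (int minute = 0; minute < 30; minute++) {
            double temp = read_temp(clock, ambient);
            if (temp > TEMP_LIMIT_C && clock > CLOCK_MIN_MHZ)
                clock -= STEP_MHZ;        /* shed clock (and voltage)    */
            else if (temp < TEMP_LIMIT_C - 3.0 && clock < CLOCK_MAX_MHZ)
                clock += STEP_MHZ;        /* headroom: climb back up     */
            ambient += 0.1;
            printf("t=%2d min  clock=%4d MHz  temp=%.1f C\n",
                   minute, clock, temp);
        }
        return 0;
    }

With a cooler this weak, the "climb back up" branch almost never wins, so the sustained clock settles well below the advertised maximum - exactly the shape of the chart above.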

I'm sure PowerTune is cool technology and all, but please don't try to pull the wool over our eyes by pretending that the R9 290X's stock cooling works fine and that the card was "always meant" to run at its maximum temperature. Because it really doesn't work fine. Frankly, it wouldn't hurt to admit failure once in a while. I'd respect it a lot more if you said "yeah, the R9 290X at stock isn't exactly all we'd hoped, but the idea behind it is solid" instead of insisting that it's fine and optimal when there are massive problems.

[Chart: Arma frame-rate results]
 
I'll keep this short for you guys, but these are really critical questions.

1. The Radeon HD 7790 is the same Bonaire chip as the Radeon R7 260X. Are there any plans for AMD to enable TrueAudio for HD 7790 users? It does not make any sense to disable this hardware function.

2. APU Dual Graphics (APU + discrete) compatibility and reliability have been hit or miss; it does not work as well as discrete CrossFire. Can you fix this on Kaveri so it works in 95% of the games out there?

3. With the CrossFire bridge gone on the 290/290X, does AMD have any plans to allow more than 2-way CrossFire for the lower-end segment in future Radeons?

4. Will Mantle have a Linux/SteamOS port? (If this was answered at the APU summit, I'd still love to get an answer from you guys here.)

5. Does Kaveri extend hUMA to discrete GCN graphics? If it doesn't, are there any plans to do this on future Radeons? (Again, if this was answered at the APU summit, I'd still love to get an answer from you guys here.)
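
To make concrete what I'm asking in question 5 - the practical difference hUMA would make for a discrete card - here's a rough sketch of the data flow with and without shared virtual memory, in OpenCL 2.0-style host code. Setup and error handling are omitted; this is an illustration, not working production code:

    /* Sketch: staged-copy vs. shared-virtual-memory data flow.
       Assumes an OpenCL 2.0 platform with SVM support; context,
       queue and kernel creation are omitted for brevity. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>

    void copy_based(cl_context ctx, cl_command_queue q, cl_kernel k,
                    float *host_data, size_t n) {
        /* Without unified memory: stage data through a device buffer. */
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                    n * sizeof(float), NULL, NULL);
        clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),
                             host_data, 0, NULL, NULL);      /* copy in  */
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        /* ... enqueue the kernel ... */
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float),
                            host_data, 0, NULL, NULL);       /* copy out */
        clReleaseMemObject(buf);
    }

    void svm_based(cl_context ctx, cl_kernel k, size_t n) {
        /* With hUMA/SVM: one allocation visible to both CPU and GPU. */
        float *shared = clSVMAlloc(ctx, CL_MEM_READ_WRITE,
                                   n * sizeof(float), 0);
        /* CPU writes *shared directly (fine-grained SVM), or after
           clEnqueueSVMMap for coarse-grained SVM ... */
        clSetKernelArgSVMPointer(k, 0, shared);
        /* ... enqueue the kernel; GPU reads/writes the same memory ... */
        clSVMFree(ctx, shared);
    }

No copy in, no copy out - that's the whole appeal for big data sets shared between CPU and GPU.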
 
Is AMD putting more effort into software than before, and if so, how?

Does AMD even listen to its fans/customers, and if so, why haven't the heatsinks/fans been upgraded? Are they really that expensive, and couldn't AMD charge for the extra R&D and materials? Does AMD not realize that noisy cards are no longer the norm?

Thanks
 
Just a few questions:

1: Has the time between releases of new GPUs changed? I.e., from the 7xxx series to the 2xx series and so on?

2: It seems that this generation's GPUs are increasing in power usage. Is this a trend, or will the next generation's top-of-the-class cards see lower power usage than today's 290X/290?

3: What games are you playing and enjoying lately? Any Dwarf Fortress players among you?
 


The 290X is beating the 780 and Titan at everything but 1080p, which is to be expected since it's throttling. That will be solved with better coolers.
 
What plans are there to allow cross-firing an APU with a dedicated graphics card? Will we ever be able to CrossFire something at the performance level of an HD 7850 or 7870 (or even higher) with an APU, particularly now that CrossFire traffic goes over the PCIe interface?

That would be a very tempting proposition for a gaming rig on a budget.
 
OK, I went over the thread and didn't see this question, so I'm going to ask it. I'm an avid 3D gamer and am currently forced to use Nvidia due to their support for stereoscopic gaming. Does AMD Radeon have any plans for a native implementation of this technology? The current third-party solution is inadequate, and I'd absolutely love to see a native implementation. I know it's a very niche market, but it's a niche market that tends to spend ridiculous amounts of money on their rigs (2x 780s for me; the AMD equivalent would be dual CrossFire R9 290Xs).
 
Will there be a fix for the performance of Dual Graphics with an APU in a future driver update? I'm asking because of this article: http://www.tomshardware.com/reviews/dual-graphics-crossfire-benchmark,3583-10.html

Also, if a computer manufacturer doesn't specify, how can I determine whether an APU + GPU combination uses Dual Graphics or just graphics switching? I was looking for a web page that might list optimal APU + GPU combinations.

And lastly, can the discrete GPU in a Dual Graphics laptop be used by itself, instead of in tandem with the APU's graphics?
 
Okay, so I'll try to make this question as general as possible. I've noticed that a lot of people have problems with the Radeon HD 7690M XT, mostly because it is used in HP computers with switchable graphics. From what I've read, HP and AMD argue over who is supposed to make drivers for it, and it hasn't had a driver update since 2011. Even trying to download new AMD drivers from the AMD website doesn't work. The reason this matters is that newer games like Sleeping Dogs crash constantly with the error message "Display driver stopped responding and has recovered", which is a common problem.

Do you know if there will ever be a driver update for this card, or can you recommend anything to fix these problems?
 
Two questions for ya:
Is Dual Graphics technology for Llano, Trinity and Richland going to be reworked with better drivers, or are improved APU+GPU configurations going to be exclusive to Kaveri parts?
Is Dual Graphics on the 200 series better than on previous series? Are big driver upgrades coming for them?
 
"Even with a third-party cooling solution, like the Accelero 3 some users have started deploying, the logic of PowerTune will still try to maximize TDP by allowing temperatures to float higher until some other limit is met (voltage, clock, fan RPM, whatever)."

"AMD even listen to its fans/customers, and if so, why hasn't the Heat-sinks/Fans been upgraded? Are they really that expensive and couldn't AMD charge for the extra R&D and materials etc. Do AMD not realize noisy cards are not the norm any more."

Does anyone want to buy a noisy card with hot temps and fans capped at 55%? It kind of distracts from what appears to be a phenomenal card. A year ago I had a 6870 from manufacturer "X" with dual fans; after the third set of noisy fans/coolers failed, I modded it with a Kühler 620 closed-loop CPU water cooler and extra heatsinks for the memory. That solved the heat and noise problems and allowed significant overclocking, improving the PassMark 3D score by 40% over 6870 averages.

Will AMD ever adjust the mounting holes on GPU heatsinks, or sell brackets, to allow easy use of closed-loop CPU water coolers on the GPU? Performance numbers indicate the 290X would be a beast if unchained from its terrible fan/cooler, and with fewer fan RMAs the card's reliability record would improve.

If enthusiasts can easily replace a CPU cooler with aftermarket cooling, why don't GPU makers make it just as easy, instead of forcing enthusiasts to use zip ties and paste heatsinks onto cheap coolers?

Thanks for being willing to answer questions!
 
Hello, thanks for taking the time to answer questions!

Mobile is playing a huge role in the future. Is there anything planned GPU-wise to combat Tegra and Adreno graphics?

I was disappointed not to see many Trinity APUs infiltrate mobile platforms such as Ultrabooks (or something similar) and all of the hybrid designs (Yoga 2). Will Kabini change this?

Is there anything planned to combat Nvidia's G-Sync/LightBoost? I just purchased one of those Asus 144 Hz monitors (today, actually), and I hate using modded drivers to make things work with my 7870. It would be nice if all that were officially supported.

At the moment, I'm very disappointed in the Xbox One's and PS4's capabilities compared to the PC. Not sure if this has anything to do with Mantle or whatnot, but can we expect to see titles looking better on the consoles in the near future? I know it's early, but the trailers I've seen have not left me impressed. Thanks again!
 
#1: Is AMD working with, or going to work with, devices such as the Oculus Rift or other virtual-reality devices in the near or far future?
#2: Is AMD thinking about wireless graphics, with the prospect of using light waves?
#3: Will AMD ever clarify which card is best for which resolution? Or is that bad for marketing?

Thanks!
 


They already answered #3 in an earlier part of the AMA, maybe on page 2?
 
My question was one of the earliest ones and it didn't get answered.
Anyway, I just want to know: what will AMD do to compete with Nvidia's Maxwell next year?
Do you guys have a strategy already?
 
Hello AMD Reps,

I'm surprised I haven't seen this asked yet, and if I missed your reply, I'm sorry. But here it goes:

Question 1: Would I be safe investing in a 990FX motherboard for a Steamroller CPU, or whatever is next in your lineup (some suspect you may be releasing something else for it)? Or is it a dead socket?

Question 2: Could I CrossFire an R9 270X with my HD 7870? If so, would I need a CrossFire bridge?

Question 3: Would you recommend I add a secondary GPU for 1440p, or sell my HD 7870 and upgrade to an R9 280X?

Question 4: What is the expected increase in performance with Mantle?

Question 5: What's the point of this thread if you can't get answers on CPUs? Lol, half the threads on this website want to know what's going on with Steamroller so we can prepare our wallets!!!
 
Will Mantle support an official/semi-official GCN ISA assembler and expose more of the GPU than is available today? There are things you can only do with an ISA assembler that are impossible in OpenCL (not to mention DirectX or DirectCompute), so that kind of access to the GPU is sometimes vital to boost performance or accelerate an implementation. Will Mantle have some kind of assembler targeting GCN ISA commands directly, or will it compile to AMD IL?
 

Sorry, I didn't want to reply to someone else's post, because this AMA isn't directed towards me, but:

Q1: There aren't any CPU guys here, so there's probably no point asking that.

Q2: I suppose yes to both, since they're essentially the same card, and XDMA (bridgeless CrossFire) is only supported on GCN 1.1.

Q4: They've already answered that, so I guess you'll need to be more specific.

Q5: What is the point of anything when someone else wants something else? Tom's may hold another AMA with the CPU folks; they've said as much in this thread before.
 
@ojas: Again, there is so much in this thread. It's hard to read through everyone's posts, especially this late at night. Thank you for answering my questions.

@AMD Rep: Another thing I've wondered: are we going to see R9 290s and 290Xs with non-reference coolers available to the public anytime soon?

That would be the ticket for me! No voiding my warranty to get decent cooling performance! I'd probably sell my sad little HD 7870 in an instant! lol
 
Hi AMD staff!

I have a few questions (I had more, but others have already asked them):

1.
What's the minimum guaranteed base clock on the 290 and 290X?

2.
We've seen reports from Tom's Hardware that retail 290X cards clock much lower than press samples (someone posted a chart on this page above), and even a user on Tech Report claiming much lower clocks than the review samples showed.

Is this simply because the current PowerTune implementation is heavily dependent on cooling (which will vary from card to card)?

This issue with the 290X is causing people to be cautious regarding the 290 as well.

3.
In light of (2), and the fact that AnandTech went so far as to recommend AGAINST the 290 due to the noise it makes (I think they measured over 55 dBA), wouldn't it have been a better idea to redo the reference cooler? Maybe make it a dual-fan blower?

4.
Partly because of (2) and (3), doesn't the 290 make the 290X pointless?

5.
Wouldn't it have been a better idea to keep the 290 at a 40% fan limit (and thus quieter) and let partner boards demonstrate Titan-class performance at $425-450?

(originally suggested by cvearl here: http://www.tomshardware.com/forum/id-1868968/amd-radeon-290-review-fast-400-consistent/page-2.html#11867543 )

6.
You're saying Johan Andersson came to you suggesting a close-to-the-metal API. However, at Nvidia's Montreal event, John Carmack and Tim Sweeney were pretty skeptical about the API, suggesting that it's only good insofar as it'll push Microsoft and OpenGL forward.

Neither wished to see the Glide days repeat. As someone who owns a game whose "true" graphical beauty I've never seen because it was locked to Voodoo cards, neither do I.

Andersson started working in the post-Glide era, so I'm not sure his endorsement means much.

Having said that, here are my Mantle-related questions:

6.a.)
Open? How? It's a low-level API, exclusive to GCN. How is it going to be compatible with Fermi/Kepler/Maxwell etc., or Intel's HD Graphics? For that matter, will you be forced to maintain backwards compatibility with GCN in the future?

6.b.)
All we know from AMD so far about Mantle is that it can provide up to 9x more draw calls. Draw calls on their own shouldn't mean much if the scenario is GPU-bound. You suggest it'll benefit CPU-bound and multi-GPU configs more (which already have 80%+ scaling). (See the toy model at the end of this post.)

That said, isn't Mantle more of a Trojan horse for better APU performance and increased mixed APU+GPU performance? AMD's APUs are in many cases CPU-bottlenecked, and the mixed-mode performance is barely up to the mark.

6.c.)
All said and done, will Mantle see any greater adoption than GPU-accelerated PhysX? At least GPU PhysX is possible on non-Nvidia hardware, should Nvidia choose to allow it.

Wouldn't it have been better to release Mantle as a set of extensions to OpenGL (like Nvidia does), given the gradual rise of *nix gaming systems? And Microsoft's complete disinterest in Windows as a gaming platform... or heck, even in the PC itself.

6.d.)
Developers have said they'll "partner" with you; however, the only games with confirmed (eventual) support are BF4 and Star Citizen. Unreal Engine 4 and id Tech don't seem to support Mantle, nor do their creators seem inclined to add it in the near future.

Is that going to change? Are devs willing to maintain five code paths? It would make sense if they could use Mantle on the consoles, but if they can't...

7.
With TSMC's 20nm potentially unavailable until late next year, is AMD considering switching to Intel's 22nm or 14nm for its GPUs? Sounds like heresy, but ATI and Intel weren't competitors.

8.
Regarding G-Sync: what would be easier, licensing Nvidia's tech and eventually getting them to open it up, or creating an open alternative and asking them to contribute? There is, after all, more excitement about G-Sync than about things like 4K.

9.
Is AMD planning on making an OpenCL-based physics engine for games that could hopefully replace PhysX? Why not integrate it with Havok?

10.
We've seen that despite GCN having exemplary OpenCL performance in synthetic benchmarks, in real-world tests GCN cards are matched by Nvidia and Intel solutions. What's going on there?

11.
Are the video-encoding blocks present in the consoles (PS4, Xbox One) also available on GCN 1.1 GPUs?

12.
What is the official/internal AMD name for "GCN 1.1"? I believe it was Anand of AnandTech who coined that term.

13.
I remember reading that GPU PhysX will be supported on the PS4. Does that mean PhysX support will be added to Catalyst drivers on the PC? Or rather, will Nvidia allow AMD GPUs to run PhysX?

A lot of questions, but I've had them for a long time. Thanks!
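
P.S. To illustrate the point in 6.b, here's a toy back-of-the-envelope model of why draw-call overhead only matters when the CPU is the bottleneck. All numbers are invented; real driver overhead varies wildly:

    /* Toy model: CPU-side cost of draw-call submission (C).
       Per-call overheads are invented; only the shape of the
       arithmetic matters, not the absolute numbers. */
    #include <stdio.h>

    int main(void) {
        const double d3d11_us_per_call  = 40.0;       /* assumed overhead */
        const double mantle_us_per_call = 40.0 / 9.0; /* "9x more calls"  */
        const double other_cpu_us = 8000.0; /* game logic etc., per frame */

        for (int calls = 2000; calls <= 10000; calls += 4000) {
            double d3d11_frame  = other_cpu_us + calls * d3d11_us_per_call;
            double mantle_frame = other_cpu_us + calls * mantle_us_per_call;
            printf("%5d calls: D3D11 ~%3.0f fps, Mantle ~%3.0f fps "
                   "(CPU-bound ceiling)\n",
                   calls, 1e6 / d3d11_frame, 1e6 / mantle_frame);
        }
        return 0;
    }

If the GPU can't render the frame that fast anyway, neither ceiling matters - which is why the 9x figure on its own says very little.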
 


See? That's a reasonable statement. Not "NO, IT'S PERFECT AND IT WAS ALWAYS MEANT TO RUN AT 95 C NO REALLY GUYS!".

 


They didn't say Mantle is compatible with other hardware. They said they are open to cooperation. If it compiles to some IL, then it is possible to run it on other hardware too, once the other vendors have agreed to cooperate.


How is GPU PhysX possible on any other hardware? It would depend on that hardware's developer implementing PhysX for their own chips (and obviously Nvidia won't do it for them). AMD would have to implement PhysX to compile to AMD IL, or even to specific hardware (like GCN). And again, obviously, Nvidia never intends to help them port the PhysX-to-compiler-to-PTX pipeline over to AMD IL and a GCN or VLIW back end. It is impractical to assume GPU PhysX would run as smoothly on non-Nvidia hardware as it does on Nvidia's. The same goes for CUDA.

GL extensions from Nvidia don't seem to help much on Windows, and most developers don't care about Linux that much (we don't see many titles there). No one cares. The OpenGL people must decide which path they're pursuing, CAD or gaming; sustaining both goals doesn't seem very practical.

If the new consoles offer a sufficiently low-level development platform, even direct support for GCN ISA commands, then it will be possible to port that code to the PC with much less migration effort, depending on how Mantle is implemented and whether it also supports an ISA assembler or a similar approach. Even if the consoles and Mantle aren't the same at the high level, it is possible to apply the same low-level development path to all of them. Implementing something with ISA commands on the consoles is almost impractical or impossible to reproduce in DirectX. Of course it depends on how a given code path is developed on the console and on how Mantle is implemented on GCN. But once developers start to use GPGPU and more GPU-specific features, it will be harder to accomplish the same things in DirectX, so Mantle may help in those kinds of situations.

How Mantle is designed and how it will be implemented on GCN are completely different questions.

Intel is not allowing anyone to use its foundries; Altera is an exception. IMO it is impossible for Intel to let AMD use its foundries at a reasonable price.

Which real-world results are you referring to for OpenCL performance? OpenCL is also somewhat hardware-dependent; you need better kernel optimizations per architecture.
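
As an example of what I mean by kernel optimizations being hardware-dependent, here are two functionally identical SAXPY kernels; which one is faster depends on the architecture it runs on (just a sketch):

    /* Two functionally identical SAXPY kernels (OpenCL C).
       Which is faster depends on the GPU: vector width, wavefront/warp
       size (64 on GCN, 32 on Nvidia), and memory-coalescing behaviour. */
    __kernel void saxpy_scalar(__global const float *x,
                               __global float *y,
                               const float a) {
        size_t i = get_global_id(0);
        y[i] = a * x[i] + y[i];
    }

    __kernel void saxpy_vec4(__global const float4 *x,
                             __global float4 *y,
                             const float a) {
        size_t i = get_global_id(0);   /* each work-item handles 4 floats */
        y[i] = a * x[i] + y[i];
    }

On GCN the float4 variant, when it wins, usually wins through wider memory transactions and a work-group size that is a multiple of the 64-wide wavefront; other hardware may prefer the scalar form. Tuning like this (plus local-memory tiling, unrolling, etc.) is why a card can top synthetic OpenCL benchmarks and still lose in applications tuned for someone else's hardware.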

I'm almost 100% sure that for the PS4 they meant GPU-accelerated physics calculations, not Nvidia PhysX running on the PS4's GPU.
 
Hello AMD representatives, and thank you for this opportunity. By the way, you should do this more often!

I am a bit of an AMD fan because I've always seen AMD as the "Robin Hood" of IT: AMD has always focused on delivering good performance at very affordable prices. Now for the questions:

1) It's obvious that AMD had a great vision that matured into a strategy spanning several years:
a) first you win all the major consoles out there, and by that you ensure that most games will (have to) be optimized for AMD hardware;
b) then you develop Mantle in tight cooperation with (some of) the major game developers out there to solidify your gains.
It is obvious that such a strategy involves committing significant resources (especially since it covers both the CPU and GPU sides of the business). Is this vision from the Dirk Meyer era, or has it grown under Rory Read's tenure?

2) According to at least two reviews of the 290 that I read (from AnandTech and Tom's Chris Angelini), the reference card seems to outperform the 290X despite the 4 CUs that have been cut, thanks (as I read) to the fan-speed increase to 47%, which made both reviewers complain about the card's noise. This means the 290X's performance gain over the 290 will not justify the extra cost (if there is any gain at all). Do you plan to improve the 290X later on?

3) Any plans to change the reference stock coolers, or alternatively to offer a premium cooling option (for example, water cooling, as was the case with the FX-9590)?

4) If I remember correctly, the 7990 appeared very late (a year later than the single-chip options). Is there a 299X (dual-chip solution) in the works?

5) I hear a lot about the advantages that general-purpose processing gains from bringing the GPU closer to the CPU (parallel workloads can be executed more efficiently by the graphics card), but is there any advantage for the GPU in this close integration with the CPU (workloads that can more easily be delegated from the GPU to the CPU)? If yes, please give some examples.

6) Are there any plans to develop a Radeon GPU specifically for the mobile segment (phones, tablets, smart wearables)?

7) Will there be a GCN 1.2 or 2.0, or are you already working on a future architecture?

8) Steam has a huge number of subscribers and definitely has a working model that can rival that of the gaming consoles. The fact that they are serious about building a console ecosystem around their service is not to be taken lightly. Nvidia was very quick to rally to Steam to counter your console design wins. I know it's been asked before, but do you plan to sit this one out, or will we see AMD getting involved in the Steam console project?

9) And the last one: did you have to make any sacrifices in the GPU architecture to ease unification with the CPU? If yes, please give a few examples; if not, please explain why none were needed.

Thanks in advance for your responses, and keep up the good work! :)
I hope the editors at Tom's will create a nice front-page article from all the information you've given out today, so that readers who missed these talks can catch up.
 