Ask Me Anything - Official AMD Radeon Representatives


Skysnake

Honorable
Nov 6, 2013
13
0
10,510
Sorry for another question, but this was also a big issue with the last HD 7000 generation.

When I reviewed an HD 7970 (as did all the other reviewers), the core voltage was unlocked. About 6-8 months after the launch, the board partners began to sell cards that were voltage locked. At first, every PR contact at the board partners was happy to answer my requests, but after a while the only answer they would give was that they couldn't really talk about it.

That was very disappointing for overclockers, because they were unable to tell whether a given card was voltage unlocked or locked. Many of them decided against AMD GPUs entirely, because with a voltage lock and a stock voltage that is far too high, the card is simply not an option. Under- and overvolting is very important to many people.

For the R9 290X, XFX is promoting "voltage unlocked", but at the moment there is no way to change the GPU voltage, or at least none that I know of. Will we soon see a tool from AMD to change the voltage?

And what about the future: will AMD (as I believe happened last time) again forbid the board partners from unlocking the voltage?
 

bassybekx

Honorable
Oct 1, 2013
134
0
10,690


In what way will Mantle affect older AMD cards such as my CrossFire HD 7850 setup? Will I see a significant performance increase, such as higher FPS in games? Sorry if it's a noob question :D
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810

The questions were put to AMD, and I would rather read a direct response from them than speculation on your part.

Had this been a normal forum thread, I'd have engaged and responded; however, I don't think it's acceptable to clog this thread up unnecessarily.

Hope you understand.
 

COLGeek

Cybernaut
Moderator
This AMA focused on GPUs. Tom's is working to schedule a CPU/APU oriented AMA with AMD in the future based on the demand for info on these topics throughout the thread. Start thinking of all of your questions now.
 

Fetzie

Honorable
Nov 6, 2013
3
0
10,510


All APUs are CPUs. An APU is just a CPU with an integrated graphics processor. Incidentally, while Intel doesn't call its processors "APUs", if you go by AMD's definition then Intel's Sandy/Ivy Bridge and Haswell CPUs are APUs too, because they include an integrated GPU (with the exception of the i5 2550K, which had a deactivated iGPU).
 

jpishgar

Splendid
Overlord Emeritus
Newp - the AMA is still underway, and will be until 12:00 noon EDT today.
The official representatives from AMD will be answering your questions from the overnight and this morning upon their return. :)
(We like to let our AMA guests sleep and eat - it keeps them happy.)
 


I asked that question; it's #3 on the 1st page. Please read all the Q&A before posting.

"The R9 290/290X: 4K
The R9 280: 1440/1600p
The R9 270X/R7 260X: Max settings 1080p and high settings 1080p

From a hardware perspective, these are our design goals for the products."
 

lithium6

Honorable
Feb 5, 2013
28
0
10,540
Hi there! Are you working with Stanford in getting GPU folding working under Linux? I'd like to migrate everything I can away from Windows while being on the red team.

And regarding your co-operation with EA, will we see the next-gen sports game engine supporting Mantle?
 

Gurg

Distinguished
Mar 13, 2013
515
61
19,070


I don't think that is a "best" recommendation but rather a minimum. The higher cards will generally get higher frame rates, but the chart shows the minimum card for that monitor.

 

Thracks

Honorable
Nov 1, 2013
101
0
10,680

I’m not a developer, so I’m unfortunately unable to intelligently answer that question. What I can say is that we’re unveiling the architecture of the API next week at the AMD Developer Conference, and that may answer more of your question.


1) I’m not on the CPU team, so I don’t know the answer to this question.

2) Yes, but we do not test or qualify such configurations so I cannot guarantee that it will work properly. You would need a CrossFire bridge.

3) I know one GPU versus two is contentious, and always will be, but I think two 7870 GPUs for 1440p will ultimately provide more performance than a single 280X.

4) Stay tuned for the AMD Developer Conference next week. On the 13th, one of the Mantle-supporting game developers will be introducing the first public demonstration with performance figures.

5) Because lots of people have graphics questions, and THG asked us in Hawaii to participate. :)



I’m sorry, I don’t sit in on the meetings that determine the roadmap for partner solutions. I don’t know.



There is no minimum clock, and that is the point of PowerTune. The board can dither clockspeed to any MHz value permitted by the user’s operating environment.
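(Purely as an illustration of the idea, not AMD's actual PowerTune algorithm: the toy governor below nudges a clock up or down against a power budget, so the resulting clock can settle at any MHz value rather than at a fixed floor. All names and numbers are hypothetical.)

```python
# Illustrative sketch only: NOT AMD's PowerTune implementation, just a toy
# governor showing how a clock can "dither" to any value under a power cap.
MAX_CLOCK_MHZ = 1000      # hypothetical boost ceiling
POWER_LIMIT_W = 250       # hypothetical board power budget

def next_clock(current_clock_mhz: float, measured_power_w: float,
               step_mhz: float = 5.0) -> float:
    """Nudge the clock up when under the power budget, down when over it.
    There is no fixed floor: the result can be any MHz value >= 0."""
    if measured_power_w < POWER_LIMIT_W:
        return min(current_clock_mhz + step_mhz, MAX_CLOCK_MHZ)
    return max(current_clock_mhz - step_mhz, 0.0)

# A hot case (high measured power) steadily pulls the clock down,
# while a cool case lets it sit at the ceiling.
clock = MAX_CLOCK_MHZ
for power in (240, 255, 260, 245, 238):   # fake telemetry samples
    clock = next_clock(clock, power)
    print(f"power={power} W -> clock={clock} MHz")
```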

2. We've seen reports from Tom's Hardware that retail 290X cards are clocking much lower (someone posted a chart on this page above), and a user on Tech Report has even claimed much lower clocks than the review samples had.

Is this simply because the current PowerTune implementation is heavily dependent on cooling (which will be variable from card to card)?
This issue with the 290X is causing people to be cautious regarding the 290 as well.

Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.

Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.

3. In light of (2) and the fact that AnandTech went so far as to recommend AGAINST the 290 due to the noise it made (I think they measured over 55 dBA), wouldn't it have been a better idea to re-do the reference cooler? Maybe make it a dual-fan blower?

Having addressed #2, we’re comfortable with the performance of the reference cooler. While the dBA measurement is hard science, user preference for that “noise” level is completely subjective. Hundreds of reviewers worldwide were comfortable giving both the 290 and 290X the nod, so I take contrary decisions in stride.

4. Partly because of (2) and (3), doesn't the 290 make the 290X pointless?

Hardly! The 290X has uber mode and a better bin for overclocking.

5. Wouldn't it have been a better idea to keep the 290 at a 40% fan limit (and thus be quieter) and allow partner boards to demonstrate Titan-class performance at $425-450?

No, because we’re very happy with every board beating Titan.

6.a.) Open? How? It's a low-level API, exclusive to GCN. How's it going to be compatible with Fermi/Kepler/Maxwell etc. or Intel's HD graphics? For that matter, will you be forced to maintain backwards compatibility with GCN in future?

You’re right, Mantle depends on the Graphics Core Next ISA. We hope that the design principles of Mantle will achieve broader adoption, and we intend to release an SDK in 2014. In the meantime, interested developers can contact us to begin a relationship of collaboration, working on the API together in its formative stages.

As for “backwards compatibility,” I think it’s a given that any graphics API is architected for forward-looking extensibility while being able to support devices of the past. Necessary by design?

6.b.) All we know from AMD as yet about Mantle is that it can provide up to 9x more draw calls. Draw calls on their own shouldn't mean too much, if the scenario is GPU bound. You suggest that it'll benefit CPU-bound and multi-GPU configs more (which already have 80%+ scaling).

That said, isn't Mantle more of a Trojan horse for better APU performance, and increased mixed APU-GPU performance? AMD's APUs are in a lot of cases CPU bottle-necked, and the mixed mode performance is barely up to the mark.

I suggested that it’ll benefit CPU bottlenecking and multi-GPU scaling as examples of what Mantle is capable of. Make no mistake, though, Mantle’s primary goal is to squeeze more performance out of a graphics card than you can otherwise extract today through traditional means.
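(A rough, made-up illustration of that point: frame time is gated by whichever of CPU submission or GPU render takes longer, so slashing draw-call overhead only raises FPS while the CPU is the limiter. All numbers below are hypothetical.)

```python
# Toy numbers (hypothetical) showing why a big cut in draw-call overhead only
# matters when the CPU is the bottleneck: the frame is gated by whichever of
# CPU submission time or GPU render time is larger.
def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000.0 / max(cpu_ms, gpu_ms)

gpu_ms = 12.0                                  # GPU needs 12 ms per frame either way
print(fps(cpu_ms=20.0, gpu_ms=gpu_ms))         # CPU-bound: 50 FPS
print(fps(cpu_ms=20.0 / 9, gpu_ms=gpu_ms))     # 9x cheaper submission: ~83 FPS
print(fps(cpu_ms=5.0, gpu_ms=gpu_ms))          # already GPU-bound: ~83 FPS
print(fps(cpu_ms=5.0 / 9, gpu_ms=gpu_ms))      # the same 9x cut changes nothing
```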

6.c.) All said and done, will Mantle see any greater adoption than GPU accelerated PhysX? At least GPU PhysX is possible on non-Nvidia hardware, should they choose to allow it.
Wouldn't it have been better to release Mantle as various extensions to OpenGL (like Nvidia does), given the gradual rise of *nix gaming systems? And Microsoft's complete disinterest in Windows as a gaming platform...or heck, even in the PC itself.

It’s impossible to estimate the trajectory of a graphics API compared to a physics library. I think they’re operating on different planes of significance.

I will also say that API extensions are insufficient to achieve what Mantle achieves.

6.d.) Developers have said they'll "partner" with you, however the only games with confirmed (eventual) support are BF4 and Star Citizen. Unreal Engine 4 and idTech don't seem to support Mantle, nor do their creators seem inclined to do that in the near future.
Is that going to change? Are devs willing to maintain 5 code paths? It would make sense if they could use Mantle on consoles, but if they can't...

The work people are doing for consoles is already interoperable, or even reusable, with Mantle when those games come to the PC. People may have missed that it’s not just Battlefield 4 that supports Mantle, it’s the entire Frostbite 3 engine and any game that uses it. In the 6 weeks since its announcement, three more major studios have come to us with interest on Mantle, and the momentum is accelerating.

7. With TSMC's 20nm potentially unavailable till late next year, is AMD considering switching to Intel's 22nm or 14nm for its GPUs? Sounds like heresy, but ATI and Intel weren't competitors.

No.

8. Regarding G-Sync, what would be easier: Licensing Nvidia's tech and eventually getting them to open it up, or creating an open alternative and asking them to contribute? There is, after all, more excitement about G-Sync than stuff like 4K.

We fundamentally disagree that there is more excitement about G-Sync than 4K. As to what would be easier with respect to NVIDIA’s technology, it’s probably best to wait for an NVIDIA AMA. :p

9. Is AMD planning on making an OpenCL-based physics engine for games that could hopefully replace PhysX? Why not integrate it with Havok?

No, we are not making an OpenCL physics library to replace PhysX. What we are doing is acknowledging that the full dimension of GPU physics can be done with libraries like Havok and Bullet, using OpenCL across the CPU and GPU. We are supporting developers in these endeavors, in whatever shape they take.

10. We've seen that despite GCN having exemplary OpenCL performance in synthetic benchmarks, in real-world tests GCN cards are matched by Nvidia and Intel solutions. What's going on there?

You would need to show me examples. Compute is very architecturally-dependent, however. F@H has a long and storied history with NVIDIA, so the project understandably runs very well on NVIDIA hardware. Meanwhile, BitCoin runs exceptionally well on our own hardware. This is the power of software optimization, and tuning for one architecture over another. Ceteris paribus, our compute performance is exemplary and should give us the lead in any scenario.

11. Are the video encoding blocks present in the consoles (PS4, Xbone) also available to GCN 1.1 GPUs?

You would have to ask the console companies regarding the architecture of the hardware.

12. What is the official/internal AMD name for GCN 1.1? I believe it was Anand of AnandTech that called it that.

We do not have an official or internal name. It’s “graphics core next.”

13. I remember reading that GPU PhysX will be supported on the PS4. Does that mean PhysX support will be added to Catalyst drivers on the PC? Or rather, will Nvidia allow AMD GPUs to run PhysX stuff?
A lot of questions, but I've had them for a long time. Thanks!

No, it means NVIDIA extended the PhysX-on-CPU portion of their library to developers interested in integrating those libraries into console titles.



1) The gaming strategy you’re seeing today is the brainchild of the graphics GM Matt Skynner, along with his executive colleagues at AMD. It comes with the full support of the highest echelons at AMD.

2) I want to reiterate this answer: Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.

Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.

3) I don’t sit in on these engineering meetings.

4) I cannot speculate on future products, I’m sorry.

5) I cannot think of any reverse examples, where offloading from the GPU to the CPU would be beneficial.

6) We have no plans to enter the smartphone market, but we’re already in tablets from companies like Vizio with our 4W (or less) APU parts.

7) Graphics Core Next is our basic architecture for the foreseeable future.

8) You will see AMD-powered Steam machines in 2014.

9) No, it’s more about changing the direction of CPU architecture to be harmonious with GPUs. Of course the GPU ISA has to be expanded to include things like unified memory addressing and C++ code execution, but this capability already exists within Graphics Core Next. So, on the GPU side, it’s all about extending the basic capabilities of the GPU, rather than changing the fundamentals to get GPGPU.
 

Fetzie

Honorable
Nov 6, 2013
3
0
10,510
How many game development teams did you have on board for Mantle when it was announced at GPU'13 in Hawaii?
 

Thracks

Honorable
Nov 1, 2013
101
0
10,680

Quoting this for emphasis.



1: This is something for Stanford to undertake, not really something we can “help them” with, as we already provide the necessary tools on our developer portal.

2: Mantle is in the Frostbite 3 engine. EA/Dice have disclosed that the following franchises will soon support Frostbite: Command & Conquer, Mass Effect, Mirror’s Edge, Need for Speed, PvZ, Star Wars, Dragon Age: Inquisition. With respect to unannounced titles, I guess we all have to wait and see what they have in store!



1: Right now I’m playing Tomb Raider, Dishonored, and Chivalry: Medieval Warfare.

2: I don’t remember the name, but it’s a special and easily-applied compound that cures during manufacturing.
 

Sovereign_Pwner

Honorable
Jun 14, 2013
75
0
10,660
I would like to hear your thoughts on the price disparity between different countries.
For instance, the R9 290 can't be found for less than 500 US dollars in Australia, but can easily be found in America (Newegg/Amazon) for 400 US dollars. The same is seen in India, where the 7850 still costs around 360 US dollars.

Also, do price drops issued by AMD apply everywhere? Even old products are far more expensive in Australia compared to the United States. The 7870 is supposed to be 200 USD, but you'd be hard pressed to find one for less than 250 USD in Australia. I know some of it comes down to taxes, but it is an exorbitant premium we have to pay (some 300 dollars more) for the exact same products.
 

Thracks

Honorable
Nov 1, 2013
101
0
10,680


We set suggested prices for our GPUs in US dollars. The prices you see in any other country are the product of tax, duty, import, and the strength of a currency compared to US dollars. Once a retailer purchases the board from us, we have absolutely no control over what they do with the product.

I currently live in Canada, and as a nation we struggle with the same problem on premium electronics. We're a nation of 35 million people, looking longingly across the border at a country of 350 million people with some of the lowest electronics prices in the world. I come from the US, and it was immediately obvious that Canada's fiscal policies make electronics more expensive than what I'm accustomed to. I hear about this struggle every day in the news, but I accept (with frustration) that it is a product of the fact that Canada imports everything with higher tax/duty/import fees than the US.
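(A rough, hypothetical illustration of how those factors stack on top of a USD suggested price; every rate and margin below is invented for the example and is not an actual AMD or retailer figure.)

```python
# Hypothetical example only: rates and margins are made up to illustrate how a
# USD suggested price compounds into a higher local shelf price.
def local_price(usd_msrp: float, fx_rate: float, sales_tax: float,
                import_and_duty: float, retail_margin: float) -> float:
    """Convert a USD suggested price into a local-currency shelf price."""
    price = usd_msrp * fx_rate              # currency conversion
    price *= (1 + import_and_duty)          # duty/import/freight costs
    price *= (1 + retail_margin)            # retailer margin
    price *= (1 + sales_tax)                # local sales tax (e.g. GST/VAT)
    return round(price, 2)

# e.g. a 399 USD card with made-up Australian figures
print(local_price(399, fx_rate=1.10, sales_tax=0.10,
                  import_and_duty=0.05, retail_margin=0.08))
```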
 

Skysnake

Honorable
Nov 6, 2013
13
0
10,510

Then let me help you ;)

MapReduce is something that can profit from this in the last few reduction steps. When you have fewer than (GPU ALUs × 4) work units, you cannot utilize the whole GPU. With the compute queues and the ACEs it takes less than that to keep the chip busy, but the hard limit is (CU ALUs × 4) work units.

So if you have fewer work units than that, it would be a good idea to test whether the reduction is faster on the CPU or not. It depends on where you need the data next; if it is needed on the CPU afterwards, a CPU-side reduction is much more likely to be a win.

The same goes for very irregular workloads. For example, if you have an n-body simulation that uses distance groups to accelerate the calculation, you have to check whether the mapping is still okay, and for that a CPU should currently be much faster.

There are more than enough situations where offloading to the CPU might be worth a closer look.
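(A minimal sketch of that decision point, with hypothetical hardware figures; the CU and ALU counts below are invented for the example. The idea: keep the pairwise reduction going while there is still enough parallel work to fill the device, then finish the small tail on the CPU. Plain NumPy stands in for what would really be an OpenCL kernel launch.)

```python
# Sketch only: NumPy stands in for device code to show the GPU/CPU handoff.
import numpy as np

CU_COUNT = 40            # hypothetical number of compute units
ALUS_PER_CU = 64         # hypothetical ALUs (lanes) per CU
WAVES_PER_CU = 4         # the "x 4" factor from the post
GPU_THRESHOLD = CU_COUNT * ALUS_PER_CU * WAVES_PER_CU   # 10,240 work items

def reduce_sum(data: np.ndarray) -> float:
    """Pairwise ("tree") reduction: stay on the GPU while the array is large,
    hand the remaining small tail to the CPU."""
    while data.size > GPU_THRESHOLD:
        if data.size % 2:                        # keep the length even
            data = np.concatenate([data, [0.0]])
        half = data.size // 2
        # In real code this line would be one GPU kernel launch per pass.
        data = data[:half] + data[half:]
    # Too little parallelism left to fill the GPU: finish the sum on the CPU.
    return float(data.sum())

print(reduce_sum(np.ones(1_000_000)))            # expect 1000000.0
```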
 

ojas

Distinguished
Feb 25, 2011
2,924
0
20,810
Thanks for the replies!

Regarding absence of minimum clock:
But doesn't that imply that, given unfavorable thermals (ambient), there's a very real danger of a 290 falling to, say, GTX 770 levels of performance?

Quoting Tom's:
With only a handful of data points pegging 290X between GeForce GTX 770 and 780, and quicker than Titan, consistency appears to be AMD’s enemy right now. Company representatives confirm that there's a discrepancy between absolute fan speed and its PWM controller, and that the company is working to remedy this with a software update. Our German team continued investigating as I peeled off to cover GeForce GTX 780 Ti, and demonstrated that the press and retail cards are spinning at different fan speeds. But there's more to this story relating to ambient conditions, so you'll be hearing more about it soon.

and:
Nvidia seems happy capitalizing on this confusing state of affairs, and is positioning GeForce GTX 780 Ti as the fastest single-GPU board out there…consistently. The degree to which it wins depends on how AMD’s flagship is used. Sometimes the 780 Ti takes a single-digit-percent win; other times it’s 30%+ faster. Whether Nvidia’s advantage is worthwhile depends on what you’d see from your R9 290X.

Regarding compute:
http://www.tomshardware.com/reviews/geforce-gtx-780-ti-review-benchmarks,3663-11.html

As you can see, it's all very back and forth.

and the attached benchmark charts (photoshop.png, winzip.png).
 

htapocysp

Distinguished
Feb 23, 2011
33
0
18,540
So Nvidia 780 Ti partner cooling solutions have already been announced. Since several people here have asked about R9 290X partner cooling, hopefully you can bring this up with your management to make an official announcement before you lose customers.
 

jpishgar

Splendid
Overlord Emeritus
And with that, the AMA is officially concluded!

Major thanks to the AMD representatives who took the time out of their schedule to come and answer all the great questions our community had for them. We know this was a bit of work on their end, and we and our users are deeply appreciative of the time taken to engage with the community here at Tom’s Hardware.

For answering questions, big thanks go out to AMD's Robert "Thracks" Hallock for responding to users and relaying answers! An epic thanks to the AMD team for helping put this together on their end and securing the time and info required to make this happen. We're grateful for all the great answers, and our community really appreciated this opportunity to engage with you.

All, stay tuned for the digest of this AMA, and announcements of future Ask Me Anything features coming up!
-JP
 