kukreknecmi :
Will Mantle support an official or semi-official GCN ISA assembler, exposing more of what the GPU offers today? There are things you can only do with an ISA assembler that are impossible in OpenCL (not to mention DirectX or DirectCompute), so that kind of low-level GPU access is sometimes vital: it can boost performance or accelerate certain implementations. Will Mantle include some kind of assembler targeting GCN ISA commands directly, or will it compile to AMD_IL?
I’m not a developer, so I’m unfortunately unable to intelligently answer that question. What I can say is that we’re unveiling the architecture of the API next week at the AMD Developer Conference, and that may answer more of your question.
ojas :
griptwister :
Hello AMD Reps,
I'm surprised I haven't seen this been asked yet, and If I missed your reply, I'm sorry. But here it goes:
Question 1: Would I be safe investing in a 990FX motherboard for a Steamroller CPU, or whatever is next in your lineup (some suspect you may be releasing something else for it)? Or is it a dead socket?
Question 2: Could I CrossFire an R9 270X with my HD 7870? If so, would I need a CrossFire bridge?
Question 3: Would you recommend I go with a secondary GPU for 1440p, or sell my HD 7870 and upgrade to an R9 280X?
Question 4: What is the expected increase in performance with Mantle?
Question 5: What's the point of this thread if you can't get answers on CPUs? Lol, half the threads on this website want to know what's going down with Steamroller so we can prepare our wallets!!!
1) I’m not on the CPU team, so I don’t know the answer to this question.
2) Yes, but we do not test or qualify such configurations so I cannot guarantee that it will work properly. You would need a CrossFire bridge.
3) I know one GPU versus two is contentious, and always will be, but I think two 7870 GPUs for 1440p will ultimately provide more performance than a single 280X.
4) Stay tuned for the AMD Developer Conference next week. On the 13th, one of the Mantle-supporting game developers will be introducing the first public demonstration with performance figures.
5) Because lots of people have graphics questions, and THG asked us in Hawaii to participate.
griptwister :
@ojas: Again, there is so much in this thread. It's hard to read through everyone's posts, this late at night especially. Thank you for answering my questions.
@AMD Rep: Another thing I've wondered, are we going to see R9 290s and 290Xs with non reference design coolers available to the public anytime soon?
That would be the ticket for me! Not voiding my warranty to get decent cooling performance! I'd probably sell my sad little HD 7870 in an instant! lol
I’m sorry, I don’t sit in on the meetings that determine the roadmap for partner solutions. I don’t know.
ojas :
Hi AMD staff!
Have a few questions (had more, but others have already asked):
1. What's the minimum guaranteed base clock on the 290 and 290X?
There is no minimum clock; that is the point of PowerTune. The board can dither clock speed to any MHz value permitted by the user’s operating environment.
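The dithering behavior described above can be sketched as a simple control loop. This is an illustrative model only, not AMD's actual PowerTune algorithm; the clock ceiling, temperature target, and step size below are all hypothetical values.

```python
# Illustrative sketch of a PowerTune-style governor (NOT AMD's real
# algorithm): the clock is adjusted every control interval, bounded only
# by the board's limits and the user's thermal environment -- there is
# no fixed minimum "base clock" the board promises to hold.

MAX_CLOCK_MHZ = 1000.0   # boost ceiling (hypothetical value)
TEMP_TARGET_C = 95.0     # throttle target (hypothetical value)
STEP_MHZ = 13.0          # dither granularity (hypothetical value)

def next_clock(current_mhz: float, temp_c: float) -> float:
    """Raise the clock while thermal headroom exists, lower it when the
    target is exceeded; any MHz value in between is a valid state."""
    if temp_c < TEMP_TARGET_C and current_mhz < MAX_CLOCK_MHZ:
        return min(current_mhz + STEP_MHZ, MAX_CLOCK_MHZ)
    if temp_c > TEMP_TARGET_C:
        return max(current_mhz - STEP_MHZ, 0.0)
    return current_mhz

# A hot card settles wherever its cooling allows rather than at a
# guaranteed floor -- which is why retail clocks can vary case to case.
clock = MAX_CLOCK_MHZ
for temp in (96.0, 97.0, 96.0, 94.0, 95.0):
    clock = next_clock(clock, temp)
```

In this model the equilibrium clock is a property of the cooling environment, not a number printed on the box.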
2. We've seen reports from Tom's Hardware that retail 290X cards are clocking much lower (someone posted a chart on this page above), and even a user on Tech Report is claiming much lower clocks than the review samples showed.
Is this simply because the current PowerTune implementation is heavily dependent on cooling (which will be variable from card to card)?
This issue with the 290X is causing people to be cautious regarding the 290 as well.
Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.
Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.
3. In light of (2) and the fact that AnandTech went so far as to recommend AGAINST the 290 due to the noise it made (I think they measured over 55 dBA), wouldn't it have been a better idea to redo the reference cooler? Maybe make it a dual-fan blower?
Having addressed #2, we’re comfortable with the performance of the reference cooler. While a dBA measurement is hard science, user preference for that “noise” level is completely subjective. Hundreds of reviewers worldwide were comfortable giving both the 290 and 290X the nod, so I take contrary decisions in stride.
4. Partly because of (2) and (3), doesn't the 290 make the 290X pointless?
Hardly! The 290X has uber mode and a better bin for overclocking.
5. Wouldn't it have been a better idea to keep the 290 at a 40% fan limit (and thus be quieter) and allow partner boards to demonstrate Titan-class performance at $425-450?
No, because we’re very happy with every board beating Titan.
6.a.) Open? How? It's a low-level API, exclusive to GCN. How's it going to be compatible with Fermi/Kepler/Maxwell etc. or Intel's HD graphics? For that matter, will you be forced to maintain backwards compatibility with GCN in future?
You’re right, Mantle depends on the Graphics Core Next ISA. We hope that the design principles of Mantle will achieve broader adoption, and we intend to release an SDK in 2014. In the meantime, interested developers can contact us to begin collaborating on the API in its formative stages.
As for “backwards compatibility,” I think it’s a given that any graphics API is architected for forward-looking extensibility while still being able to support devices of the past. It’s necessary by design.
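As a hedged illustration of that kind of forward-looking extensibility (this is not Mantle's actual API; the struct and field names here are invented), a common pattern is to give newly added fields safe defaults, so code written against an older revision keeps working unchanged:

```python
from dataclasses import dataclass

@dataclass
class ImageCreateInfo:
    """Hypothetical creation-info struct for a versioned graphics API."""
    width: int
    height: int
    # Field added in a later API revision; callers written against the
    # older revision never set it, so it defaults to the old behavior.
    sample_count: int = 1

old_caller = ImageCreateInfo(width=640, height=480)      # older-revision code
new_caller = ImageCreateInfo(640, 480, sample_count=4)   # uses the new field
```

The older caller compiles and behaves exactly as it did before the field existed, which is the sense in which extensibility and backwards support are "necessary by design."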
6.b.) All we know from AMD as yet about Mantle is that it can provide up to 9x more draw calls. Draw calls on their own shouldn't mean too much, if the scenario is GPU bound. You suggest that it'll benefit CPU-bound and multi-GPU configs more (which already have 80%+ scaling).
That said, isn't Mantle more of a Trojan horse for better APU performance, and increased mixed APU-GPU performance? AMD's APUs are in a lot of cases CPU-bottlenecked, and the mixed-mode performance is barely up to the mark.
I suggested that it’ll benefit CPU bottlenecking and multi-GPU scaling as examples of what Mantle is capable of. Make no mistake, though, Mantle’s primary goal is to squeeze more performance out of a graphics card than you can otherwise extract today through traditional means.
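A toy frame-time model (with assumed numbers, not measured data) shows why cheaper draw calls matter most in CPU-bound scenarios: the frame takes as long as the slower of the two processors, so cutting per-call CPU overhead only helps until the GPU becomes the limit again.

```python
def frame_time_ms(draw_calls: int, cpu_us_per_draw: float,
                  gpu_time_ms: float) -> float:
    """CPU submission work and GPU render work overlap; whichever is
    slower sets the frame time."""
    cpu_time_ms = draw_calls * cpu_us_per_draw / 1000.0
    return max(cpu_time_ms, gpu_time_ms)

# 10,000 draws at an assumed 2 us of driver overhead each: the CPU
# (20 ms) gates a GPU that only needs 12 ms, so the frame is CPU-bound.
before = frame_time_ms(10_000, 2.0, 12.0)       # 20.0 ms

# With per-call overhead cut to a ninth (the "9x more draw calls"
# framing), the GPU is the limit again and extra calls are nearly free.
after = frame_time_ms(10_000, 2.0 / 9, 12.0)    # 12.0 ms
```

In a GPU-bound scene the same overhead reduction changes nothing, which is why draw-call figures on their own don't predict frame rates.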
6.c.) All said and done, will Mantle see any greater adoption than GPU accelerated PhysX? At least GPU PhysX is possible on non-Nvidia hardware, should they choose to allow it.
Wouldn't it have been better to release Mantle as various extensions to OpenGL (like Nvidia does), given the gradual rise of *nix gaming systems? And Microsoft's complete disinterest in Windows as a gaming platform...or heck, even in the PC itself.
It’s impossible to estimate the trajectory of a graphics API compared to a physics library. I think they’re operating on different planes of significance.
I will also say that API extensions are insufficient to achieve what Mantle achieves.
6.d.) Developers have said they'll "partner" with you, however the only games with confirmed (eventual) support are BF4 and Star Citizen. Unreal Engine 4 and idTech don't seem to support Mantle, nor do their creators seem inclined to do that in the near future.
Is that going to change? Are devs willing to maintain 5 code paths? It would make sense if they could use Mantle on consoles, but if they can't...
The work people are doing for consoles is already interoperable, or even reusable, with Mantle when those games come to the PC. People may have missed that it’s not just Battlefield 4 that supports Mantle, it’s the entire Frostbite 3 engine and any game that uses it. In the 6 weeks since its announcement, three more major studios have come to us with interest in Mantle, and the momentum is accelerating.
7. With TSMC's 20nm potentially unavailable till late next year, is AMD considering switching to Intel's 22nm or 14nm for its GPUs? Sounds like heresy, but ATI and Intel weren't competitors.
No.
8. Regarding G-Sync, what would be easier: licensing Nvidia's tech and eventually getting them to open it up, or creating an open alternative and asking them to contribute? There is, after all, more excitement about G-Sync than stuff like 4K.
We fundamentally disagree that there is more excitement about G-Sync than 4K. As to what would be easier with respect to NVIDIA’s technology, it’s probably best to wait for an NVIDIA AMA.
9. Is AMD planning to make an OpenCL-based physics engine for games that could hopefully replace PhysX? Why not integrate it with Havok?
No, we are not making an OpenCL physics library to replace PhysX. What we are doing is acknowledging that the full dimension of GPU physics can be done with libraries like Havok and Bullet, using OpenCL across the CPU and GPU. We are supporting developers in these endeavors, in whatever shape they take.
10. We've seen that despite GCN having exemplary OpenCL performance in synthetic benchmarks, GCN cards are matched by Nvidia and Intel solutions in real-world tests. What's going on there?
You would need to show me examples. Compute is very architecture-dependent, however. F@H has a long and storied history with NVIDIA, so the project understandably runs very well on NVIDIA hardware. Meanwhile, Bitcoin runs exceptionally well on our own hardware. This is the power of software optimization, and of tuning for one architecture over another. Ceteris paribus, our compute performance is exemplary and should give us the lead in any scenario.
11. Are the video encoding blocks present in the consoles (PS4, Xbone) also available to GCN 1.1 GPUs?
You would have to ask the console companies regarding the architecture of the hardware.
12. What is the official/internal AMD name for GCN 1.1? I believe it was Anand of AnandTech who called it that.
We do not have an official or internal name. It’s “graphics core next.”
13. I remember reading that GPU PhysX will be supported on the PS4. Does that mean PhysX support will be added to Catalyst drivers on the PC? Or rather, will Nvidia allow AMD GPUs to run PhysX stuff?
A lot of questions, but I've had them for a long time. Thanks!
No, it means NVIDIA extended the PhysX-on-CPU portion of their library to developers interested in integrating those libraries into console titles.
ioanpaulpirau :
Hello AMD representatives, and thank you for this opportunity. By the way you should do this more often!
I am a bit of an AMD fan because I've always seen AMD as the "Robin Hood" of IT technology; AMD has always focused on giving good performance at very affordable prices. Now for the questions:
1) It's obvious that AMD had a great vision that matured into a strategy spanning over several years:
a) first you win all the major consoles out there and by that you ensure that most games will (have to) be optimized for AMD hardware.
b) then you develop Mantle in tight cooperation with (some of) the major game designers out there to solidify your gains.
It is obvious that such a strategy involves committing significant resources (especially since it covers both the CPU and GPU sides of the business). Is this vision from the Dirk Meyer era, or has it grown under Rory Read's tenure?
2) According to at least two reviews of the 290 that I read (from the AnandTech AMD center and Tom's Chris Angelini), the reference cards seem to outperform the 290X despite the 4 CUs that have been cut. That is (as I read) due to the fan speed being raised to 47%, which made both reviewers complain about the noise of the card. This means the 290X's performance gain over the 290 will not justify (if at all) the extra cost. Do you plan to improve the 290X later on?
3) Any plans to change the reference stock coolers, or alternatively to offer a premium cooling option (for example, water cooling, as was the case with the FX-9590)?
4) If I remember correctly, the 7990 appeared very late (a year later than the single-chip options). Is there a 299X (dual-chip solution) in the works?
5) I hear a lot about the advantages general-purpose processing gains from bringing the GPU closer to the CPU (parallel workloads can be executed more efficiently by a graphics card), but is there any advantage for the GPU in having this close integration with the CPU (workloads that can be more easily delegated from the GPU to the CPU)? If yes, please give some examples.
6) Are there any plans for developing a Radeon GPU specifically for the mobile (mobile phones, tablets, smart wearables) segment?
7) Will there be a GCN 1.2, 2.0 or are you already working on a future architecture ?
8) Steam has a huge number of subscribers and definitely has a working model that can rival that of gaming consoles. The fact that they are serious about building a console ecosystem around their service is not to be taken lightly. Nvidia was very quick to rally behind Steam in order to counter your design wins in the console market. I know it's been asked before, but do you plan to sit this one out, or will we see AMD getting involved in the Steam console(s) project?
9) And the last one: did you have to make any sacrifices in the GPU architecture in order to ease the unification with the CPU? If yes, please give a few examples; if not, please explain why.
Thanks in advance for your responses, and keep up the good work!
I hope that the editors from Tom's will create a nice "front page" article from all the information you gave out to us today, so that readers who missed this talk can catch up too.
1) The gaming strategy you’re seeing today is the brainchild of the graphics GM Matt Skynner, along with his executive colleagues at AMD. It comes with the full support of the highest echelons at AMD.
2) I want to reiterate this answer: plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.
Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.
3) I don’t sit in on these engineering meetings.
4) I cannot speculate on future products, I’m sorry.
5) I cannot think of any reverse examples, where offloading from the GPU to the CPU would be beneficial.
6) We have no plans to enter the smartphone market, but we’re already in tablets from companies like Vizio with our 4W (or less) APU parts.
7) Graphics Core Next is our basic architecture for the foreseeable future.
8) You will see AMD-powered Steam machines in 2014.
9) No, it’s more about changing the direction of CPU architecture to be harmonious with GPUs. Of course the GPU ISA has to be expanded to include things like unified memory addressing and C++ code execution, but this capability already exists within Graphics Core Next. So, on the GPU side, it’s all about extending the basic capabilities of the GPU, rather than changing the fundamentals to get GPGPU.