Discussion: Polaris, AMD's 4th Gen GCN Architecture

Hmm... DX12 will become standard eventually, so that's some really good news for CF/SLI, methinks. Unless Vulkan takes over. Now, I have no background with multi-GPU setups and the majority of people say it's better to go with a single GPU, but might DX12 alleviate CF/SLI issues a bit?
 
I wouldn't mind slapping in a spare card to go with my 980 Ti if it gives certain DX12 games a boost (though it will only work with a DX12 game that has been expressly written to take advantage of multi-GPU, of course), and I also have integrated graphics in my 6700K that could lend a hand!

So I'm all for it, and let's hope devs make the effort to implement it. I hear it's not easy, and the smaller-budget devs might not be able to pioneer this sort of thing.

My only issue is that although I have 4 x PCIe 3.0 slots on my mb, if I were to run more than one card it drops from x16 to x8 + x8 (no PLX chip). I'm not sure that will make any noticeable difference, though?
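On the "expressly written to take advantage of multi-GPU" point: under DX12's explicit multi-adapter model the game itself has to enumerate every adapter in the box (a 980 Ti and the 6700K's iGPU would both show up) and decide what work to hand each one; nothing happens automatically. Here's a minimal sketch of just that first enumeration step, assuming the Windows 10 SDK headers; the variable names and the MB conversion are mine, and error handling is dropped for brevity:

```cpp
// Sketch: list every DXGI adapter so an engine could pick GPUs for
// explicit multi-adapter work. Assumes the Windows 10 SDK is installed.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Skip the software rasterizer; a real engine would also check
        // feature level and memory before giving this adapter any work.
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;

        std::printf("Adapter %u: %ls (%zu MB dedicated VRAM)\n",
                    i, desc.Description, desc.DedicatedVideoMemory >> 20);
    }
    return 0;
}
```

From there the engine still has to create a device per adapter, split the work between them and copy results across, which is exactly the part that has to be written per game.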
 
I wonder if it's possible to make a piece of software that would "fool" a game into thinking it is being run on a single GPU while dividing the work between the 2/3/4 GPUs in the system with no noticeable delay. That way the devs wouldn't have to make any multi-GPU profiles.
Maybe I'll take some programming classes, study electronics and then become a millionaire. Huh.
 


It'll take a lot more than some programming classes to get to that point. Creating software like that would be a lot harder than, say, creating a game. When people program games, the majority of the work has already been done for them in frameworks: .NET is the most common framework on Windows, and for graphics you have DirectX and other APIs. If they really wanted to, developers could skip DirectX and program their own graphics, but that'd be too difficult; the people who actually write the frameworks themselves are, I believe, the most experienced coders, and they would be the only ones capable of doing something like you propose.
 


Oh, I am very well aware that the Sunday programmers' circle won't do. It was more of a joke on my side.
I just thought maybe you could group and name the different workloads a game creates and, based on the current GPU usage, assign tasks to the least utilized unit. But again, I have no idea about the actual strain a game puts on the hardware, so it's more theoretical daydreaming than a groundbreaking idea ^^
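For what it's worth, the "assign it to the least utilized unit" bit on its own is easy enough to sketch; it's everything around it (splitting the work, moving data between cards, keeping frames in sync) that makes the daydream hard. A toy example with made-up numbers, nothing GPU-specific about it:

```cpp
// Toy sketch of "hand the next task to the least utilized GPU".
// All names and costs are made up; real load balancing also has to worry
// about data transfers, dependencies and frame pacing, which this ignores.
#include <vector>
#include <algorithm>
#include <cstdio>

struct Gpu {
    int id;
    double busyMs;   // pretend amount of queued work, in milliseconds
};

// Pick the GPU with the least queued work and charge the task's cost to it.
int assignTask(std::vector<Gpu>& gpus, double taskCostMs)
{
    auto least = std::min_element(gpus.begin(), gpus.end(),
        [](const Gpu& a, const Gpu& b) { return a.busyMs < b.busyMs; });
    least->busyMs += taskCostMs;
    return least->id;
}

int main()
{
    std::vector<Gpu> gpus = { {0, 0.0}, {1, 0.0} };  // e.g. a 980 Ti plus an iGPU
    double taskCosts[] = { 4.0, 4.0, 1.0, 6.0, 2.0 };
    for (double cost : taskCosts)
        std::printf("task (%.1f ms) -> GPU %d\n", cost, assignTask(gpus, cost));
    return 0;
}
```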
 
So... apparently someone answered this thread, and even my newest post has been quoted once, but I can't see anything beyond that last post. Anyone else?
I've had the feeling some of my posts don't show up for others. It's the first time there's been an update in my followed threads that I can't see, though.

Hello? Is this the police? I found a bug in the matrix.
 


I would wonder if it is actually an issue with the game and how it is generating that content, and not the driver.

Of course AMD can call Nvidia out, but doing so on a game they have been heavily involved in doesn't mean as much as doing it on a game that neither was heavily involved in.

That would be like Intel optimizing software for a 10 core Broadwell-E part and calling out AMD for their performance in it.
 
@TehPenguin haha, that was me. I made a post, thought about it a bit more, then realized that I wasn't really sure of what I posted, so I deleted it. The gist of it was that I think graphics APIs and drivers already take care of some of the functionality you're talking about, taking it out of game developers' hands. But I really don't know much about it at the end of the day.
 


Actually, with DX12 the game developers will be much more responsible for making SLI and CFX work properly, since the whole idea behind DX12 is to lighten the API load on the GPU.
 


You mean CPU, no?
 
No, I mean API. DX12 lightens the API load on the GPU. I might have worded it weirdly, so let me try to make it clearer.

Before APIs, software was written directly to the hardware. While this meant much better performance, it also meant a higher risk of crashing the entire system if one driver crashed. APIs were developed as a buffer between the hardware and the driver/software, to help keep a crashing game or application from taking Windows down with it. The problem is that the APIs got pretty bloated.

DX12 is doing what console APIs have done for years: allowing developers to write code closer to the hardware. In essence, DX12 is thinning out the API level. One benefit is that it also lightens the CPU load, thus increasing performance.
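To make the "developers are much more responsible" part concrete: even basic CPU/GPU synchronization, which the D3D11 driver handled behind your back, is explicit fence plumbing in D3D12 that the game has to write itself; with CFX/SLI it has to be done per GPU, plus the cross-adapter copies. A rough sketch, assuming a device and command queue already exist and with error handling dropped:

```cpp
// Sketch of explicit CPU/GPU sync in D3D12 - work the D3D11 driver used
// to do for you. 'device' and 'queue' are assumed to exist already.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void waitForGpu(ID3D12Device* device, ID3D12CommandQueue* queue)
{
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    HANDLE fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr);

    // Ask the GPU to signal the fence once it reaches this point in the
    // queue, then block the CPU until it does. With multiple GPUs the game
    // repeats this (and schedules the cross-adapter copies) for every queue.
    const UINT64 fenceValue = 1;
    queue->Signal(fence.Get(), fenceValue);
    if (fence->GetCompletedValue() < fenceValue)
    {
        fence->SetEventOnCompletion(fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
    CloseHandle(fenceEvent);
}
```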
 


You're 40W short; the 380 is a 190W part.

http://www.anandtech.com/show/9387/amd-radeon-300-series/2
 


 
While having bragging rights to the fastest GPU is a win and a PR boost for whoever holds the "trophy", most of the cards sold are in the entry to mid-range segment. Few can afford the top-end cards. So if AMD knows it can't capture the crown, why not go after the bread-and-butter segment? It makes perfect sense to target it. There are lots of AMD proponents who will happily purchase those cards, and they may get some fence sitters to try AMD. In the last 900 series, Nvidia had a big hole in its pricing between the 950 and the 960. I would expect AMD to go after that price segment, since it is a weak spot for Nvidia.

I like Kyle's website and I'm on it a lot, but that AMD article seemed like something written by one really pissed-off dude. It isn't always good to rely on people who have been fired or have quit working for AMD for the majority of your information, since their perspective on the true state of things at AMD can be far from objective.
 


I said similar about Richard Huddy but that fell on deaf ears.
 


Yes, I messed up the name; I meant to say RX 480. I'm well aware the 380 is a 190W card.
 


Ah, lol!! Is he still wearing white suits? How is Mantle these days?
 



Mantle is now Vulkan dude.
 


Oh, I thought Vulkan was the new version of OpenGL by Khronos that AMD donated some of the Mantle code to. I guess it was more than that after all. Thanks for the update.
 


I read somewhere that AMD was skipping GDDR5X, it not being easily worked into their design with much benefit. Searching for a link.
 


That wouldn't surprise me; GDDR5X is a half measure, and AMD co-developed HBM to be their new memory standard.

With GDDR5 running at up to 8 Gbps, GDDR5X at 10 Gbps isn't really that impressive; it's still quite a ways short of HBM1, let alone the upcoming HBM2, which is apparently what they are using for the upcoming Vega cards towards the end of this year / early 2017.
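The raw numbers back that up: peak bandwidth is just the per-pin data rate times the bus width, divided by 8. A quick back-of-the-envelope sketch (the 256-bit bus is my assumption for a typical GDDR5/5X card; the 4096-bit figure is Fiji's HBM1 stack):

```cpp
// Back-of-the-envelope peak memory bandwidth:
//   GB/s = (data rate per pin in Gbps) * (bus width in bits) / 8
// Bus widths are assumptions for typical cards, not any specific SKU.
#include <cstdio>

double bandwidthGBs(double gbpsPerPin, int busWidthBits)
{
    return gbpsPerPin * busWidthBits / 8.0;
}

int main()
{
    std::printf("GDDR5   8 Gbps x  256-bit: %4.0f GB/s\n", bandwidthGBs(8.0, 256));
    std::printf("GDDR5X 10 Gbps x  256-bit: %4.0f GB/s\n", bandwidthGBs(10.0, 256));
    std::printf("HBM1    1 Gbps x 4096-bit: %4.0f GB/s\n", bandwidthGBs(1.0, 4096));
    return 0;
}
```

That works out to roughly 256, 320 and 512 GB/s respectively, which is why GDDR5X looks like a half step next to HBM.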
 


It would make a lot of sense from a cost point of view. If you have to develop interfaces for GDDR5, GDDR5X and HBM1/2, then your cost per GPU is going to get tricky.

Even though, from what I read, makers only need small modifications to their GDDR5 controllers to support GDDR5X, it is still an extra cost. You need more validation (QA) and all that. I would imagine that AMD will wait until GDDR5X comes down in price before moving from GDDR5 to it, keep using the same old GDDR5 to its limits, and use HBM2 for everything $450+ (I would imagine). It would be surprising if Vega kept using GDDR5, though; I'd expect it to use HBM2.

Cheers!
 


As opposed to current setups where they have interfaces for DDR3, GDDR5 and HBM?

I am just saying that I do not expect any other HBM based GPUs right now. If the 490 has HBM and is close in performance to the Fury X it will cut into sales. I assume that the 490 series will stick with either GDDR5 or GDDR5X. I don't see them possibly undercutting sales of a more profitable part.

Next year, however, will probably be HBM 1 for mid to high end with HBM2 being in their top end GPU.

Again it depends on costs too.
 


But one thing is having the IMC support, and another is the routing of pins and packaging. Well, I guess GDDR5X must be the same as GDDR5... I'll take a closer look; I only read the tech sheet for it very briefly. Assuming you're right on that front, then you might be correct. As for DDR3, I have no idea how the pin layout plays there. Maybe they use a subset of the GDDR5 layout? In any case, at least we know HBM1/2 and GDDR5/X are not compatible, haha.

And I forgot to mention that AMD *is* involved in the HBM development. I can't remember where I read it, but it's easy enough to verify, I guess.

So, Vega having HBM2 would be my expectation, but I don't see HBM1/2 trickling down to mainstream as soon as next year, Jimmy. Especially when the competing solution is GDDR5/X, which is cheap enough for all the mid to lower segments, I'd say. AMD (or TSMC/GloFo) would have to have nailed down the packaging costs to a level where it's no longer an issue. I haven't seen any reports on that front, but you never know, right? Haha.

Cheers!
 