AMD Vega MegaThread! FAQ and Resources


Rogue Leader

It's a trap!
Moderator


Yeah, but a die shrink could fix the production issues of the original Vega, increase power efficiency, and improve performance, making it a much better competitor to Nvidia than the original. Let's be honest: as competitive as the Vega 56 was with the 1070, the 1070 Ti came out to beat it, and on top of that, production issues and mining made it impossible to find anyway.
 
But that would also mean AMD is putting the same pig out with a tighter jacket to fight an old tiger and, probably, a young one soon.

I just think the die shrink is not going to fix Vega's inherent GCN problems, even if they manage to clock it higher and slap 32GB of HBM2 in it.

You could say I'm just really not impressed with Vega so far, my dear RL; even if the Pig wears a new, tighter jacket, it's still a Pig.

Cheers!
 

Rogue Leader

It's a trap!
Moderator


Well, it is still early. I'm hopeful that they have learned from their mistakes; we will see as more news comes out and a consumer version looks closer to the horizon. Strangely enough, on today's front page is a test of the PowerColor Red Devil Vega 64. Cool-looking card, but it's also 1100 bucks. You'd have to be high to actually buy one.
 

goldstone77

Distinguished
AMD’s new/old roadmap
[Image: AMD roadmap slide, Vega 7nm with 32GB HBM2]

https://videocardz.com/newz/amd-confirms-7nm-gpus-are-coming-to-gamers
 


Navi is likely the one gamers are waiting on. The 7nm die shrink may give a decent bump, especially if clocks scale up nicely, but then we are still talking about competing with the 1080 Ti.
 
It still depends on what AMD's real focus is with the architecture. Back in the days of TeraScale, AMD was very competitive in die size and power consumption with a more gaming-oriented architecture. They started losing their edge on that front when they began adding more compute-oriented features. IMO there is no way they are going to compete with Nvidia using one architecture for both compute and gaming. 7nm might give AMD some edge, but it is not exclusive to AMD; Nvidia also has access to TSMC's 7nm.
 
My worry is that AMD tied its own hands (arms and legs!) with HSA. I would imagine if they made Navi not fully HSA-compliant, or maybe moved on to an "HSA v2" and removed a lot of the non-consumer-oriented stuff from it, they'd be in a better position.

I liked the idea of Polaris/Vega as mid/high-end, but Nvidia just crushed their roadmap with the 1080, lol. Now they're only just announcing Navi. I even wonder if they'll relegate (current) Vega to mid-range with the die shrink instead of revamping Polaris...

Oh welp, I hope AMD gets their act together in this arena like they did with the CPUs.

Cheers!
 
That might be forward-looking support, since I really doubt AMD will give Vega 20 PCIe 4.0 with no motherboards (or CPUs) currently supporting it en masse. I would imagine Navi will be PCIe 4.0, though; it would align better with PCIe 4.0 motherboard and CPU support (given refresh timings).

Cheers!
 


Or it gives us a window into what AMD might have planned for Ryzen 2. It would be cool if they brought PCIe 4.0 with it.
 


Though in the consumer space, do we even hit the limits of PCIe 3.0?
 

goldstone77

Distinguished


The 1080 Ti shows ~1% or less difference in performance going from PCIe 3.0 to 2.0. PCIe 4.0 would likely be most beneficial in newer motherboards for newer SSDs, since PCIe 3.0 x4 devices are taking up more and more lanes.
As of now, the PCI Express 4.0 standard has been finalized and officially released. The new protocol promises twice the per-lane bandwidth of PCI Express 3.0, allowing a GPU or other accelerator to transfer up to 64GB/s in a duplex x16 link. It’s also been a long time coming.
[Image: PCI Express bandwidth comparison chart]



Edit: That's the difference going from 3.0 x16 to 2.0 x16 for the 1080 Ti.
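
For reference, the per-generation bandwidth behind those numbers works out like this. A back-of-envelope Python sketch (the GENS table and bandwidth_gbps helper are my own, using the standard line rates and encoding overheads):

```python
# Rough PCIe bandwidth calculator (my own sketch, not from the articles above).
# Encoding overhead: 8b/10b for gen 1/2, 128b/130b for gen 3/4.
GENS = {
    1: (2.5, 8 / 10),     # GT/s per lane, encoding efficiency
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
}

def bandwidth_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth in GB/s, one direction, before protocol overhead."""
    gt_per_s, efficiency = GENS[gen]
    return gt_per_s * efficiency * lanes / 8  # bits -> bytes

for gen in GENS:
    print(f"PCIe {gen}.0 x16: {bandwidth_gbps(gen, 16):5.2f} GB/s per direction")
# PCIe 3.0 x16 ~ 15.75 GB/s; PCIe 4.0 x16 ~ 31.51 GB/s per direction,
# i.e. ~63 GB/s in both directions at once -- the "64GB/s duplex" figure.
```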
 
I remember reading, somewhere, that PCIe 4.0 would increase the 300W cap for x16 cards. I might be remembering wrong, though.

That would be the only reason AMD might want to bring PCIe 4.0 to the table quickly. I would also imagine they want to increase CPU bandwidth to peripherals soon, since x16+4 already seems to be getting tight for mainstream.

Cheers!
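
To put rough numbers on the "x16+4 is getting tight" point, here is a back-of-envelope lane tally in Python (the build itself is my hypothetical, assuming the ~24 usable PCIe 3.0 lanes on a first-gen Ryzen CPU):

```python
# Lane-budget tally for a hypothetical mainstream AM4 build (my assumption:
# 24 usable PCIe 3.0 lanes on first-gen Ryzen).
cpu_lanes = 24
build = {
    "GPU slot (x16)": 16,
    "NVMe SSD (x4)": 4,
    "chipset uplink (x4)": 4,
}

used = sum(build.values())
print(f"used {used} of {cpu_lanes} CPU lanes, {cpu_lanes - used} spare")
# -> used 24 of 24 CPU lanes, 0 spare: a second NVMe drive or a 10GbE card
#    already has to hang off the chipset and share its uplink.
```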
 

goldstone77

Distinguished


Correction: PCIe 4.0 won't support up to 300 watts of slot power
By Tim Schiesser on August 25, 2016, 7:00 PM

Unfortunately, a spokesperson for the PCI Special Interest Group (PCI-SIG) incorrectly stated to Tom's Hardware that slot power limits would be raised in the PCIe 4.0 standard. This is not the case: PCIe 4.0 will still have a 75 watt limit on slot power, the same as previous iterations of the spec.

The increased power limit instead refers to the total power draw of expansion cards. PCIe 3.0 cards were limited to a total power draw of 300 watts (75 watts from the motherboard slot, and 225 watts from external PCIe power cables). The PCIe 4.0 specification will raise this limit above 300 watts, allowing expansion cards to draw more than 225 watts from external cables.
https://www.techspot.com/news/66108-pcie-4-wont-support-300-watts-slot-power.html
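
The arithmetic behind that correction looks like this. A quick sketch (board_power_limit is my own hypothetical helper, using the standard 75W slot, 75W 6-pin, and 150W 8-pin ratings):

```python
# Rough PCIe power-budget math implied by the correction above (my sketch).
SLOT_W = 75                               # slot limit, unchanged in PCIe 4.0
CONNECTORS = {"6-pin": 75, "8-pin": 150}  # standard external connector ratings

def board_power_limit(*plugs: str) -> int:
    """Max in-spec power for a card with the given external connectors."""
    return SLOT_W + sum(CONNECTORS[p] for p in plugs)

print(board_power_limit("6-pin", "8-pin"))  # 300 W -- the PCIe 3.0 ceiling
print(board_power_limit("8-pin", "8-pin"))  # 375 W -- beyond the old total cap
```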

Also,
Finally Found the Limit of PCIe x16 vs. x8 (Dual Titan Vs)
Gamers Nexus
Published on Dec 19, 2017

https://youtu.be/i8iE_sQBFXk?t=56

PCIe x16/x16 vs. x8/x8 (Dual Titan V Bandwidth Limit Test)
By Steve Burke Published December 19, 2017 at 6:22 pm

Conclusion: Finally Finding the Limits, but --
This isn’t representative of the whole. We’ve tested one game, here, and that’s about the limit of what is even compatible with 2x Titan Vs. Production software, like Blender or Premiere, won’t stress the PCIe interface in the same way – the cards don’t need to talk to each other, in these scenarios, and can operate independently on a tile-by-tile or frame-by-frame basis. Gaming puts more load on the PCIe bus as the cards transact more data to create each frame. These devices, as stated before, aren’t meant for gaming, so that’s largely a non-issue. They also aren’t really compatible in 2-way configurations, so that further eliminates the realism of this test.

What we are left with, however, is a somewhat strong case for waning PCIe bandwidth sufficiency as we move toward the next generation – likely named something other than Volta, but built atop it. SLI or HB SLI bridges may still be required on future nVidia designs, as it’s possible that a 1080 Ti successor could encounter this same issue, and would need an additional bridge to transact without limitations.
https://www.gamersnexus.net/guides/3176-dual-titan-v-bandwidth-limit-test-x8-vs-x16

Another good source of information: https://www.anandtech.com/show/11967/pcisig-finalizes-and-releasees-pcie-40-spec
 
Why Polaris and not Vega-based? Using Polaris would mean the card lacked the new features introduced with Vega. Then again, making weird decisions is not that strange for AMD, like the 370 (a 7870/270 rebrand) that ended up without FreeSync support. Those who didn't follow GPU news closely probably found it weird that an older card like the 260X had FreeSync support while a "new" card like the 370 did not.
 