AMD Vega 20 Supports PCIe 4.0

Status
Not open for further replies.

Giroro

Splendid
I must not understand what "Vega 20" is, because based on AMD's current naming scheme that would make it a chopped down current-gen GPU with 1/3 the compute units of Vega 64.
But there's no way they would pair 32GB HBM2 with something like that.
 
So maybe this is a view into what Ryzen 2 will bring with a new chipset.

People with current Ryzen products would still be able to use the Ryzen 2 CPUs, as all PCIe standards are backwards compatible, but without the new chipset they will remain on PCIe 3, since AMD said they will stay on the same socket for the next year and a half.
 

Gillerer

Distinguished
As far as anyone outside AMD knows, the 7 nm Vega 20 will only make its way into professional Radeon Instinct Vega cards (the one in the "reveal" link in the article). At least AMD has given no indication of planning to release new consumer/gaming GPUs before Navi. That means there is really no need for PCIe 4 to be supported by Ryzen CPUs or on AM4 motherboards for now - at least not due to Vega 20. Navi may be another matter.

The reason AMD went pro-only with Vega at 7 nm supposedly has to do with adopting GlobalFoundries' new 7 nm process and the expectation of difficulty sustaining high-enough yields in the first few months to a year. By selling the GPUs as professional products (with appropriately high margins), AMD can make a profit with a considerably lower number of viable dies per wafer. Volumes on professional GPUs are also lower than on consumer ones, so GF can more easily continue to tune their production without affecting end output too much.
 


Much like Tesla. Makes sense. They probably want to break into the HPC and professional market more.

It will be interesting to see what the 7nm yields are like to start.

Also, I don't think HBM2 can make PCIe 4 sweat. PCIe 3 is fine with current HBM2 GPUs. In fact, PCIe 4 will probably further increase the odds that consumer cards won't saturate the lanes.
 

DerekA_C

Prominent


I would agree about not saturating the lanes until they start doing mGPU or 3D-stacked GPU PCBs, which GF said they are now capable of doing as of two months ago.
 

P1nky

Distinguished


AMD publicly said that Vega 7nm is built by TSMC.

 

bit_user

Polypheme
Ambassador

In the linked Computex article, there's mention of Vega 10 using PCIe for inter-device communication and reference to NVLink. Much like NVLink, AMD's Infinity Fabric can run a cache-coherent interconnect over PCIe. That's what it's about - multi-GPU configurations for deep learning and HPC.

I expect EPYC 7 nm will also support PCIe 4.0.
 
"..32GB of HBM2 memory..."

Not sure that the CAPACITY (amount) of memory has much effect; in fact, as software gets better at buffering, having more capacity may REDUCE the PCIe bandwidth required, though we're not really there yet. Memory bandwidth and GPU performance are usually paired for VALUE rather than strictly for performance, since some use cases benefit from slightly faster memory. So probably (I'm no expert) the video memory bandwidth is the best indicator of what PCIe bandwidth needs to be to prevent any losses from the motherboard bus.
 
"Here's hoping we'll soon see a single card that's capable of pushing those new Asus and Acer 4K 144Hz monitors to their limits."

Riiigghht...

Before I even mention GPUs, you first have a CPU bottleneck to clear before a solid 4K/144 FPS happens. It's going to be YEARS before game code gets multi-threaded enough to see that on a regular basis.

Historically gains are not huge between GPU generations. The "low hanging fruit" of the earlier years has been replaced with slight architectural modifications and frequency bumps.

For "single GPU" I think we'll see our first huge jump on the high-end products once multi-chip GPU products surface (topologically a single GPU, but physically separate chips joined by a "glue" interconnect or similar process).

In the short term, using a good G-SYNC (Nvidia GPU) or FreeSync 2 HDR (AMD GPU) monitor and aiming for an appropriate FPS is going to be the best solution for a while (like 100+ FPS for shooters, and 60 FPS or whatever for Tomb Raider and other slower games) to find the optimal balance between responsiveness and visual fidelity.
 

Valantar

Honorable
Vega 20 is 7nm Vega for server/HPC/AI. These workloads generally need far more bandwidth than any consumer workload, so PCIe 4 makes perfect sense.

As for this coming to consumers: nope. AMD has all but confirmed that this silicon design is purely focused on the AI and compute crowds, with plenty of double- and half/quarter-precision hardware that gamers have no use for. A large die with near-zero performance gains and plenty of unutilized silicon area would be a terrible idea to launch as a flagship GPU. I suppose they could make a new Frontier Edition to match the Titan V?
 

bit_user

Polypheme
Ambassador

Leaving the potential of CPU bottlenecks aside, AMD's Computex presentation (https://www.tomshardware.com/news/amd-7nm-gpu-vega-gaming,37228.html) included a claim that Vega 20 is 1.35x as fast as Vega 10 (which we know as Vega 64, in its full configuration). I'm not sure they said 1.35x as fast at what, but if we take this as an across-the-board figure, that would only put it in range of the GTX 1080 Ti. So, no ground-breaking performance gains, but 35% faster also isn't trivial.
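As a rough sanity check of where 1.35x would land (my assumption, not AMD's: 4K benchmark averages of the era put the GTX 1080 Ti roughly 25-30% ahead of Vega 64, though the gap varies by title):

```python
# Rough sanity check of AMD's 1.35x figure against the GTX 1080 Ti.
# The 1080 Ti range below is an assumed ballpark, not an official number.
vega64 = 1.00                              # Vega 64 normalized to 1.0
vega20_claimed = round(vega64 * 1.35, 2)   # AMD's claimed uplift
ti_range = (1.25, 1.30)                    # assumed 1080 Ti relative range

# 1.35x lands at, or just above, the assumed 1080 Ti range
print(vega20_claimed, ti_range)
```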



    ■ If you carefully read what AMD has said, they never ruled out the possibility of it reaching consumers. It's just not their initial focus.
    ■ Did they actually say it has more double-precision units than Vega 10? How much?
    ■ Consumers can use half-precision. Just look at the Far Cry 5 benchmarks (https://www.tomshardware.com/reviews/far-cry-5-performance-benchmark-ultra,5552-7.html). So far, both AMD and Intel have fast half-precision packed math in their consumer GPUs. We'll see if Nvidia's next gen follows this trend.
    ■ Nvidia's consumer Pascal GPUs have "worthless" 8-bit dot product support. So, I guess those aren't for consumers?
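The benefit of packed half-precision is easy to quantify: two FP16 values share one 32-bit ALU lane, so peak FP16 throughput is simply double the FP32 rate. A minimal sketch, using Vega 64's commonly quoted 12.5 TFLOPS FP32 stock rating:

```python
# Peak-rate arithmetic for packed FP16 (two half-precision ops per FP32 lane).
def packed_fp16_tflops(fp32_tflops):
    """Peak FP16 rate when two FP16 values are packed per 32-bit ALU lane."""
    return fp32_tflops * 2

vega64_fp32 = 12.5                       # TFLOPS, stock FP32 rating
print(packed_fp16_tflops(vega64_fp32))   # 25.0 TFLOPS peak FP16
```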



As I mentioned, they claim 1.35x performance gains. I'd hardly call that a rounding error. Also, they demo'd it running Cinema 4D - not exactly a game, but also not deep learning or really even HPC (I doubt it uses much 64-bit arithmetic).

In case you missed it:

https://www.tomshardware.com/news/amd-7nm-gpu-vega-gaming,37228.html
 


Without context, those performance claims are about as good as Half-Life 3. Nothing.

Most likely it is an increase in TFLOPS, as that's what a lot of the HPC cards tend to be rated on - and Vega 64 is already faster than a GTX 1080 Ti there (12.5 TFLOPS vs. 11.3 TFLOPS). Of course, those are stock numbers and don't include AIB overclocks and such, but that's where I would assume a chip currently targeting HPC will look for performance.
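For reference, those TFLOPS ratings fall out of a simple formula: 2 ops per clock (fused multiply-add) times shader count times clock speed. A quick sketch, using approximate boost clocks (my ballpark figures, not spec-sheet values):

```python
# FP32 peak = 2 ops (fused multiply-add) x shader count x clock (GHz).
def fp32_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000

vega64    = fp32_tflops(4096, 1.53)    # ~12.5 TFLOPS (approx. boost clock)
gtx1080ti = fp32_tflops(3584, 1.582)   # ~11.3 TFLOPS (approx. boost clock)
print(round(vega64, 1), round(gtx1080ti, 1))
```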
 

bit_user

Polypheme
Ambassador

I did point out that I didn't see any specific benchmarks cited. However, given that it's a significant node shrink, about the only thing we can rule out is:
near-zero performance gains

If @Valantar has a good source on that, I wish he'd cite it.
 

Gillerer

Distinguished
Since GPUs are well served by a PCIe 3.0 x8 connection, I feel that PCIe 4.0 would mainly benefit storage applications. Having an M.2 drive on PCIe 4.0 x4 with transfer speeds approaching 8 GB/s? I want one. (Probably can't afford one.)
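The ~8 GB/s figure checks out from the link arithmetic: transfer rate times encoding efficiency times lane count, divided by 8 bits per byte. A quick sketch:

```python
# Usable PCIe bandwidth: transfer rate x encoding efficiency x lanes / 8.
def pcie_gbps(gt_per_s, lanes, enc=128/130):
    """GB/s for a link; PCIe 3.0 and 4.0 both use 128b/130b encoding."""
    return gt_per_s * enc * lanes / 8

gen4_x4  = pcie_gbps(16.0, 4)    # ~7.88 GB/s - the "approaching 8 GB/s" M.2 case
gen3_x16 = pcie_gbps(8.0, 16)    # ~15.75 GB/s - today's full-width GPU link
print(round(gen4_x4, 2), round(gen3_x16, 2))
```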

Also, since you no longer *have* to have 8 lanes for the GPU, it leaves lanes for other things.
 

bit_user

Polypheme
Ambassador

You mean like their Pro SSG series?

https://www.amd.com/en/products/professional-graphics/radeon-pro-ssg

That's an interesting idea, but it pretty much has to be storage that's dedicated to the GPU (as in the Pro SSG series) - not a normal volume used by the OS.
 

Gillerer

Distinguished
No, I'm talking about in the future having the same 4 lanes for an M.2, but those lanes being twice as fast as currently, hence doubling the theoretical speed limits on NVMe M.2 drives.
 


That's pretty much every PCIe version. Lane counts almost never change; most GPUs will probably still use 16 lanes but never saturate them.
 

bit_user

Polypheme
Ambassador

I wonder if some mid-range GPUs might be only x8, like we currently see with lower-end models. This could be another way that Nvidia and AMD try to differentiate their professional & datacenter products from the consumer models.
 