News AMD announces unified UDNA GPU architecture — bringing RDNA and CDNA together to take on Nvidia's CUDA ecosystem

So... They're bringing GCN back from the dead? LOL.

Christ...

EDIT: Just to add a bit more to my knee-jerk reaction to the overall information (thanks for the interviews, BTW!) with regard to my comment...

AMD is missing something crucial, which was very succinctly pointed out in the article: longevity. It doesn't matter what you call it, how you market it, or what you tell the world you'll be doing technically. The reason CUDA is king is longevity and support. AMD needs to stop screwing around with the long-term strategy, stop flip-flopping so much, and stick to something for longer than 3 generations. Whatever they create, please do stick to it and support it. Does anyone remember HSA? What about Audio Acceleration? And a few other technologies they put out that didn't get adoption and were dropped, even though they were good ideas. Just, in very AMD fashion, pushed terribly into the market.

Regards.
 
Could Nvidia legally prevent AMD from making their GPUs capable of running CUDA code? If AMD could run CUDA code natively, they’d literally be right back in the game in the workplace.
 
I can’t fault the general message and their strategy in going unified but considering Huynh was evasive when you asked about a clear timeline of implementation; I guess I’ll believe it when I see it.
Huynh said: "we’re thinking about not just RDNA 5, RDNA 6, RDNA 7, but UDNA 6 and UDNA 7". Which I think is indicative of the time frame. We won't get to see the fruits of this until the RDNA 6 generation at least, so it's a few years down the road. Then again, it's not clear what he means, as it implies that we will have RNDA 6 alongside UDNA 6.
 
Huynh said: "we’re thinking about not just RDNA 5, RDNA 6, RDNA 7, but UDNA 6 and UDNA 7". Which I think is indicative of the time frame. We won't get to see the fruits of this until the RDNA 6 generation at least, so it's a few years down the road. Then again, it's not clear what he means, as it implies that we will have RNDA 6 alongside UDNA 6.
I kind of saw it as him letting us know that they are considering replacing RDNA6 with UDNA6, or they might do it at RDNA7. That they are weighing their options but don't know for sure when the replacement will occur yet. Since he announced it, they must have a good degree of certainty that they are planning to proceed, but since it seems like a bad idea I will also believe it when I see it.

Maybe it is what AMD needs to do to effectively get tensor type cores in their gaming GPUs to catch up with Nvidia and Intel in this regard.
 
I can’t fault the general message and their strategy in going unified but considering Huynh was evasive when you asked about a clear timeline of implementation; I guess I’ll believe it when I see it.
I think RDNA 5 is already in the pipeline, so I would assume it's the GPUs after that generation; maybe that is why he is being vague.

Could Nvidia legally prevent AMD from making their GPUs capable of running CUDA code? If AMD could run CUDA code natively, they’d literally be right back in the game in the workplace.
Native will never happen; there are, however, other options.

https://www.tomshardware.com/tech-i...-enables-cuda-applications-to-run-on-amd-gpus

 
Could Nvidia legally prevent AMD from making their GPUs capable of running CUDA code? If AMD could run CUDA code natively, they’d literally be right back in the game in the workplace.
AMD GPUs can run CUDA; AMD pulled the plug on the software project. Technically there is no physical reason AMD couldn't make the hardware interface CUDA-compliant, but… Nvidia has the user/license agreement tied down such that CUDA code can only be run on Nvidia hardware… the lawyers would get richer.
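For what it's worth, this is also why a source-level port (as opposed to a runtime translation layer) is tractable at all: AMD's HIP API deliberately mirrors the CUDA runtime almost name-for-name, and the hipify tools do much of the conversion as textual renaming. A rough Python sketch of that idea (the mapping here is a tiny illustrative subset, not the real tool's table):

```python
# Minimal sketch of source-level CUDA -> HIP translation, the approach
# AMD's hipify tools take. The mapping below is a small illustrative
# subset; the real tools cover the whole runtime API.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    # Replace longer names first so shorter keys cannot clobber
    # identifiers that merely share a prefix.
    for cuda_name in sorted(CUDA_TO_HIP, key=len, reverse=True):
        source = source.replace(cuda_name, CUDA_TO_HIP[cuda_name])
    return source

cuda_src = '#include <cuda_runtime.h>\ncudaMalloc(&buf, n); cudaFree(buf);'
print(hipify(cuda_src))
```

The catch, as posters above note, is legal rather than technical: renaming your own source to HIP is fine, while running unmodified CUDA binaries on non-Nvidia hardware is what the terms of use target.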
 
Could Nvidia legally prevent AMD from making their GPUs capable of running CUDA code? If AMD could run CUDA code natively, they’d literally be right back in the game in the workplace.
Nvidia’s EULA/ToS, and their aggressive enforcement thereof, have made it legally risky to sell or use a translation layer in a corporate environment, and made native CUDA support from AMD (or Intel) effectively impossible.
 
Playing devil's advocate: nVidia could license the use of CUDA. They more than likely won't, much like Intel did not want AMD to use x86 (slightly different, but it applies here). And both AMD and Intel could just help OpenCL become relevant, but they aren't, because they want their own stuff to be relevant, which is hilarious to watch (how they fail).

I'd even say Intel has seen more success than AMD on that front with oneAPI. ROCm has seen adoption, but in the end they're just not as good as a common open standard, even though they rely on or use OpenCL heavily (BLAS, for instance). Expand OpenCL, IMO, but they won't. Maybe Khronos is to blame there? Not sure. Just throw money at the problem, I guess.

Regards.
 
Could Nvidia legally prevent AMD from making their GPUs capable of running CUDA code?
Yes. In fact, ZLUDA was able to run CUDA on AMD GPUs... and within about a month of the dev making it public on GitHub, Nvidia updated their terms of use to state that CUDA can only be used on Nvidia hardware.

now this "could" change as France is in legal stance over nvidia for its cuda dominance but that wont play out for yrs and nothing may change.
 
Yes. In fact, ZLUDA was able to run CUDA on AMD GPUs... and within about a month of the dev making it public on GitHub, Nvidia updated their terms of use to state that CUDA can only be used on Nvidia hardware.

Now, this "could" change, as France is taking legal action against Nvidia over its CUDA dominance, but that won't play out for years, and nothing may change.
That’s the project that I was thinking about, it was initially funded by AMD.. they stopped funding it and it was subsequently open sourced.
 
I can't figure out what this guy is talking about.
They both "have scale", yet are still trying to grow to "hundreds" of developers.

If AMD wants hundreds of developers, then they should hire them. Lead by example and finish building the product instead of expecting their customers to do it.
 
They're not going to gain meaningful market share as long as they keep their pricing so close to Nvidia. Undercut them by a third or more while offering near equal performance and a 16GB VRAM size and then things will move.
 
They're not going to gain meaningful market share as long as they keep their pricing so close to Nvidia. Undercut them by a third or more while offering near equal performance and a 16GB VRAM size and then things will move.
Speaking of VRAM, currently that is their only advantage. They could easily get developers to fill those 16 gigs of VRAM with high-quality textures without any performance hit. The tech media would praise AMD GPUs. Probably the easiest short-term play they have to move the needle, IMO.
It would be great for gamers. Also, Nvidia started using AI for driver development years ago; AMD should do the same.
Imagine all those 12-gig and 8-gig Nvidia GPUs; it would turn the tables and shift the discussion off RT performance and directly onto VRAM. Those RX 6800s, 6800 XTs, 7800 XTs, and soon the 8800 XT would all run circles around Nvidia's more expensive counterparts.
I wonder if Nvidia-sponsored titles like Cyberpunk 2077 and Black Myth: Wukong have low-quality textures but a heavy focus on ray tracing because they would otherwise get slaughtered in the mid-range.🤔
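Some rough numbers on what "fill the VRAM with high-quality textures" actually costs, since the gap is bigger than it sounds. The per-material layout and counts below are illustrative assumptions (BC7 block compression stores 1 byte per texel, and a full mip chain adds about a third):

```python
# Back-of-the-envelope VRAM cost of texture resolution (illustrative).
# Assumes BC7-compressed textures at 1 byte per texel, with mipmaps.

def texture_mib(side_px: int, bytes_per_texel: float = 1.0, mips: bool = True) -> float:
    base = side_px * side_px * bytes_per_texel
    if mips:
        base *= 4 / 3  # mip chain is a geometric series summing to ~4/3x
    return base / (1024 ** 2)

# Assumed material layout: albedo + normal + roughness maps per material.
per_material_2k = texture_mib(2048) * 3
per_material_4k = texture_mib(4096) * 3
materials = 300  # assumed count of unique materials in a scene/level

extra_gib = (per_material_4k - per_material_2k) * materials / 1024
print(f"2K set: {per_material_2k:.1f} MiB, 4K set: {per_material_4k:.1f} MiB per material")
print(f"Going 2K -> 4K across {materials} materials adds ~{extra_gib:.1f} GiB")
```

Under these assumptions, bumping a few hundred materials from 2K to 4K eats on the order of 14 GiB extra, which is exactly the kind of budget a 16GB card has and an 8GB card doesn't.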
 
just get the PRICE down and, hopefully, the performance up, and be competitive......
Even if an AMD solution is “well priced” the traction Nvidia has in the market pushes CUDA to be a de facto standard.
Moving companies and developers away from a solution with many years of investment, time and money, is more than difficult. It takes years…
 
Yes. In fact, ZLUDA was able to run CUDA on AMD GPUs... and within about a month of the dev making it public on GitHub, Nvidia updated their terms of use to state that CUDA can only be used on Nvidia hardware.

Now, this "could" change, as France is taking legal action against Nvidia over its CUDA dominance, but that won't play out for years, and nothing may change.
Maybe the anti-trust news is true. Either way, Nvidia's AI gamble a couple of decades ago seems to have been brilliant in the short term, but if you're charging $60,000 a pop for Blackwell B200 AI GPUs, companies will start looking at alternatives. At some point, without some serious return on investment, companies will not continue to fund this expansion, so other players will step in or be given a chance. Look at the work Mistral is doing… do you really need a 40-trillion-parameter LLM, and the horsepower to train it in a reasonable amount of time, when a more lightweight LLM can be just as effective?
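On the lightweight-model point, the arithmetic is on their side: weight memory scales linearly with parameter count. A rough sketch (fp16/bf16 weights at 2 bytes per parameter; the 192 GiB per-accelerator figure is an assumption for a Blackwell-class part, and real deployments also need KV cache and activations on top):

```python
import math

def weight_gib(params_billion: float, bytes_per_param: int = 2) -> float:
    # Floor estimate: weights alone at fp16/bf16 (2 bytes per parameter).
    return params_billion * 1e9 * bytes_per_param / (1024 ** 3)

ACCEL_GIB = 192  # assumed HBM per Blackwell-class accelerator (illustrative)

for name, p in [("7B (Mistral-class)", 7), ("70B", 70), ("400B", 400)]:
    w = weight_gib(p)
    print(f"{name}: ~{w:.0f} GiB of weights -> at least {math.ceil(w / ACCEL_GIB)} accelerator(s)")
```

A 7B model fits on commodity hardware with room to spare, while a 400B-class model needs several top-end accelerators before it serves a single request, so if the small model is "good enough," the cost argument writes itself.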
 
Nvidia’s EULA/ToS and their aggressive enforcement thereof has made it legally risky to sell or use a translation layer in a corporate environment, and AMD (or Intel) coming out with native CUDA support effectively impossible.
I keep wondering, what's the difference between this and the Google vs Oracle lawsuit from a couple years ago? Why was Google creating an open version of Java fine but this isn't? Or GNU Octave instead of Matlab? Why is CUDA so special that it can't be copied?
 
I keep wondering, what's the difference between this and the Google vs Oracle lawsuit from a couple years ago? Why was Google creating an open version of Java fine but this isn't? Or GNU Octave instead of Matlab? Why is CUDA so special that it can't be copied?
SCOTUS ruled that Google’s use of the Java API was transformative and would open a new class of devices, smartphones.
https://www.supremecourt.gov/opinions/20pdf/18-956_d18f.pdf Page 3, paragraph 2.

Any “cloning” of CUDA such as ZLUDA would need a shedload of good legal advice before implementation and release..
 
but if you're charging $60,000 a pop for Blackwell B200 AI GPUs, companies will start looking at alternatives.
Not really. You're thinking about a normal person's view of $60k.
Once you cross $10B in company value (and the ones buying these are at $100B+), a mere $60k is just the cost of doing business, and can possibly be written off, lowering the company's overall business taxes.
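To put numbers on the "cost of doing business" point (the order size and company value here are illustrative assumptions, not reported figures):

```python
# Illustrative: even a large accelerator order is a rounding error at
# hyperscaler scale. All figures below are assumptions.
unit_price = 60_000                 # per accelerator
cluster_size = 10_000               # assumed order size
company_value = 1_000_000_000_000   # a $1T buyer

capex = unit_price * cluster_size
print(f"Order: ${capex / 1e6:.0f}M, i.e. {capex / company_value:.2%} of company value")
```

Even a 10,000-unit order lands around $600M, a fraction of a percent of the buyer's market value, which is why the sticker price alone doesn't drive these customers away.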


Nvidia already forced the removal of the program that allowed AMD GPUs to run CUDA-based programs... so yeah.
Nvidia didn't.
Nvidia changed the ToU to state that only Nvidia hardware can run CUDA, but that's basically a threat to their customers (as they can't actually tell who uses it).
AMD is the one who made the dev of ZLUDA take it down and start redoing the code from the pre-AMD-period codebase.
 
AMD GPUs can run CUDA; AMD pulled the plug on the software project. Technically there is no physical reason AMD couldn't make the hardware interface CUDA-compliant, but… Nvidia has the user/license agreement tied down such that CUDA code can only be run on Nvidia hardware… the lawyers would get richer.
Trust me, it's all "Legal Reasons".
nVIDIA will SUE the crap out of ANYBODY trying to use CUDA on their Non-nVIDIA GPUs.
Plain & Simple.

CUDA was designed to be 100% Proprietary to nVIDIA hardware and NOBODY else.
 
When I first heard AMD talk about splitting their GPU architectures into RDNA and CDNA, I thought it sounded good. The idea was to remove circuitry from RDNA that was geared toward data-center workloads, freeing up a larger silicon budget for gaming hardware and function, and vice versa for CDNA, yielding data-center-optimized silicon. It did sound excellent at the time.

Enter the "AI" market craze. That puts a different spin on things, eh? If it is true that "AI" turns out to have the long life ahead that proponents are advertising, then UDNA makes a great deal of sense as transistor designs and layouts for "AI" types of computation have a lot in common with data-center optimization, while gaming function and performance are always important. Time and future GPUs will tell, I guess...😉
 