News AMD Drops FSR 2.0 Source Code, Takes Shots at DLSS and XeSS

KananX

Prominent
BANNED
Apr 11, 2022
While machine learning is a real thing, I want to remind everyone that "AI" is just a buzzword: there is no real artificial intelligence, and we are in fact nowhere near it. The only difference here is the approach, and that it's calculated on specialized cores rather than the usual shader cores. XeSS, if Intel didn't lie about its performance, could be way better than DLSS since it can run on both specialized and non-specialized cores, making it far more flexible. DLSS will lose this game in the long run against FSR; it's FreeSync vs. G-Sync again, and we all know what happened. Proprietary solutions will always lose unless they are way better, which isn't the case here anymore.
 
While machine learning is a real thing, I want to remind everyone that "AI" is just a buzzword: there is no real artificial intelligence, and we are in fact nowhere near it.
So are you some kind of renowned academic in computer science who has the authority to define an industry-wide term? Or some kind of philosophy scholar who has clout over the idea of what "intelligence" even is?

The only difference here is the approach, and that it's calculated on specialized cores rather than the usual shader cores. XeSS, if Intel didn't lie about its performance, could be way better than DLSS since it can run on both specialized and non-specialized cores, making it far more flexible.
While XeSS can run on non-specialized hardware, its performance suffers when doing so.

DLSS will lose this game in the long run against FSR; it's FreeSync vs. G-Sync again, and we all know what happened. Proprietary solutions will always lose unless they are way better, which isn't the case here anymore.
Considering G-Sync's been around since 2013 and it's still being used in high-end monitors, I don't think DLSS will die any time soon. Plus CUDA is still widely used despite OpenCL also being a thing.
 

KananX

Prominent
BANNED
Apr 11, 2022
So are you some kind of renowned academic in computer science who has the authority to define an industry-wide term? Or some kind of philosophy scholar who has clout over the idea of what "intelligence" even is?
It's widely known, if you read a bit about "AI" and what it really means, that it hasn't been achieved yet. Just a few days ago there was an article here about an ousted Google engineer who said they had a real AI; he was fired shortly after, and Google walked back everything he said by stating it was in fact not a real, sentient AI. Achieving real AI needs something that can resemble a human brain; we don't yet fully understand how the brain works, much less are we able to build one of our own. Go inform yourself a bit instead of trying to pick my posts apart, which won't succeed anyway.

While XeSS can run on non-specialized hardware, its performance suffers when doing so.
I wouldn't call a few percentage points "suffering"; you're not well informed here either.
Considering G-Sync's been around since 2013 and it's still being used in high-end monitors, I don't think DLSS will die any time soon. Plus CUDA is still widely used despite OpenCL also being a thing.
G-Sync is barely used in monitors, and even most high-end monitors don't use it anymore. G-Sync had a short window of relevance; now it's largely been replaced by FreeSync, which is also a well-known fact. Do you actually keep up with tech, or are you just in the forum to pick apart what others post and start useless discussions? I don't see much sense in your post, and I'm not doing your homework for you either.

Comparing this to the CUDA and OpenCL situation makes no sense, as CUDA is preferred because it works really well while the competition's solutions don't. That could change in the coming years as well. I see you have a biased Nvidia agenda here; not good.
 

Deleted member 431422

Guest
Good move, AMD. If FSR 2.0 is pretty much as good as DLSS and it's royalty-free, it's the same situation we had with other formats in the past. As long as something can be easily and widely adopted, it doesn't matter if there are better solutions; the better ones are usually tied to one manufacturer or require expensive licensing fees. Just like Betamax lost to VHS: it was better, but it was tied to Sony and its products and never got traction.
 

twotwotwo

Reputable
Aug 10, 2019
3 days? 4 weeks?
How many programmers for that duration?

The README (and the presentations) break down what has to be done. The reason it might be a quick project is that if you already use another flavor of temporal AA, or the engine already supports one, you mostly plug the same inputs/outputs into FSR 2.0; the different types of TAA mostly do the same sort of thing.

If you're not so lucky, you need to change your renderer to output motion vectors and a depth buffer (which allow mapping pixels in previous frames to the current frame), you need to separate out some post-upscale steps (drawing the HUD, some effects), and you need to jitter the camera position so a series of frames picks up that sub-pixel-level detail. You can also provide some "bonus" inputs, like a mask that communicates "this part of the image may not be well predicted by past frames."
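Roughly, the per-frame flow looks like the sketch below. To be clear, the names here (UpscalerInputs, halton, and so on) are illustrative stand-ins I made up, not the actual FidelityFX-FSR2 entry points; the real API is in the README linked below.

```cpp
// Schematic per-frame flow for wiring a temporal upscaler like FSR 2.0 into a
// renderer. Types and names (UpscalerInputs, renderFrame, etc.) are
// illustrative placeholders, not the real FidelityFX-FSR2 API.
#include <cstdint>

struct Texture {};            // placeholder for an engine texture/GPU resource

struct UpscalerInputs {
    Texture* color;           // low-resolution, jittered scene color (pre-HUD)
    Texture* depth;           // depth buffer, needed to reproject prior frames
    Texture* motionVectors;   // per-pixel motion, maps last frame -> this frame
    Texture* reactiveMask;    // optional: "history may be wrong here" hint
    float    jitterX, jitterY;
    uint32_t renderWidth, renderHeight;   // input resolution
    uint32_t outputWidth, outputHeight;   // target resolution
    float    deltaTimeMs;
};

// Halton(2,3)-style low-discrepancy sequence, so successive frames sample
// different sub-pixel positions and the accumulator can recover detail.
static float halton(uint32_t index, uint32_t base) {
    float f = 1.0f, result = 0.0f;
    while (index > 0) {
        f /= static_cast<float>(base);
        result += f * static_cast<float>(index % base);
        index /= base;
    }
    return result;
}

void renderFrame(uint32_t frameIndex, UpscalerInputs& in) {
    // 1. Jitter the projection matrix by a sub-pixel offset (in pixels here).
    in.jitterX = halton(frameIndex + 1, 2) - 0.5f;
    in.jitterY = halton(frameIndex + 1, 3) - 0.5f;
    // applyJitterToProjection(in.jitterX / in.renderWidth, in.jitterY / in.renderHeight);

    // 2. Render the scene at render resolution, writing color, depth and
    //    motion vectors, but NOT the HUD or post-upscale effects yet.

    // 3. Hand everything to the upscaler, which reprojects and accumulates
    //    history into an outputWidth x outputHeight image.
    // upscaler.dispatch(in);

    // 4. Only now draw the HUD, film grain, etc. at output resolution.
}
```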

This is a neat visual breakdown of what TAA does: https://www.elopezr.com/temporal-aa-and-the-quest-for-the-holy-trail/
The actual README isn't so bad at breaking down the parts: https://github.com/GPUOpen-Effects/FidelityFX-FSR2

Unrelated to the quote, but a relatively simple addition that seems like it could have a real payoff is a smart/dynamic quality level: render at a higher quality level (or native) when possible, and switch to lower ones as needed to hit some minimum framerate.
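For what it's worth, a minimal sketch of such a controller could be as simple as the following; the thresholds, level scales, and names are invented for illustration, not anything AMD ships.

```cpp
// Minimal sketch of the dynamic-quality idea above: step the upscaler down a
// quality level when frame time misses the budget, step back up when there is
// headroom. All numbers here are made up for illustration.
#include <cstdio>

// Render-scale presets roughly matching "native .. performance" tiers.
static const float kRenderScale[] = { 1.00f, 0.77f, 0.67f, 0.59f, 0.50f };
static const int   kNumLevels     = 5;

struct QualityController {
    int   level      = 0;       // 0 = native, 4 = most aggressive upscaling
    float budgetMs   = 16.6f;   // target: ~60 fps
    float hysteresis = 0.85f;   // only step up when comfortably under budget

    // Call once per frame with the measured GPU frame time.
    float update(float frameTimeMs) {
        if (frameTimeMs > budgetMs && level < kNumLevels - 1)
            ++level;                       // too slow: render fewer pixels
        else if (frameTimeMs < budgetMs * hysteresis && level > 0)
            --level;                       // headroom: claw back quality
        return kRenderScale[level];        // feed this into the upscaler's render size
    }
};

int main() {
    QualityController qc;
    const float frameTimes[] = { 14.0f, 18.2f, 19.5f, 16.0f, 12.1f, 11.8f };
    for (float t : frameTimes)
        std::printf("frame %.1f ms -> render scale %.2f\n", t, qc.update(t));
}
```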
 
I have a serious problem with what AMD showed here...

It's either "~4 weeks" or "4 weeks+" (which is effectively ">4 weeks"). Using both at the same time is stupid and redundant (to a degree). How do you even read that? "It's around over 4 weeks"? "It's around 4 weeks, plus"? XD

Oof, that was so much anger. Sorry.

As for what this means, well, I'm not entirely sure, but it is nice when companies just open the door for the world to improve on what they have started.

As for the debacle over "AI" and "machine learning": they're basically buzzwords that become annoying the way any term does when it's overused and, sometimes, used incorrectly. "Cloud", anyone? "Synergy", anyone? Machine learning is not useless; far from it. It lets you narrow down algorithms based on heuristics that would otherwise take humans a long time to tune by hand. Whenever you have a problem where the exact result is too costly to compute (usually non-polynomial in cost; NP), you want heuristics to get a close/good-enough result, and yadda yadda. So the consumer-facing "AI" pitch is just saying "hey, we have some algorithms based on running heuristics quite a lot, and they'll improve things over generic algorithms or less refined heuristics", and most of that is math run at quarter/half precision (FP8 or FP16), since you are not looking for accurate results (lots of decimals) but for fast operations.

Is this truly useful? Flip a coin; it's truly case by case. Sometimes a properly trained algorithm can help a lot, but that takes a lot of iterations and also requires that the problem is complex enough to warrant it. So that brings us to temporal-solution-based upscaling. Is it a hard problem to solve from the algorithmic point of view? No, not really, I'd say. The proof is in the pudding: how much better is DLSS 2.x than FSR 2.0? Within striking distance of the "generic" heuristics used by FSR, no? And even then, FSR has some image-quality wins, I'd say (subjective, so I won't argue; I can be wrong here). Maybe nVidia needs to train the AI more per game? Maybe they need to improve the backbone of the Tensor cores so the heuristics can be more accurate or process more data per pass? Ugh, so much "whatifism", so I'll just stop here.
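For the curious, the shared heuristic at the core of these temporal upscalers boils down to something like the toy sketch below: reproject the accumulated history using the motion vector, clamp it against the current frame's neighborhood to reject stale data, then blend. This is a heavily simplified CPU illustration, not AMD's or NVIDIA's actual code.

```cpp
// Toy illustration of the per-pixel heuristic shared by TAA-style upscalers:
// clamp the reprojected history against the current neighborhood, then blend
// in a fraction of the new (jittered) sample. Simplified sketch only.
#include <algorithm>

struct Color { float r, g, b; };

static Color clampToNeighborhood(const Color& history, const Color& nMin, const Color& nMax) {
    return { std::clamp(history.r, nMin.r, nMax.r),
             std::clamp(history.g, nMin.g, nMax.g),
             std::clamp(history.b, nMin.b, nMax.b) };
}

// current     : this frame's jittered low-res sample for the pixel
// nMin / nMax : min/max of the current sample's 3x3 neighborhood
// reprojected : history value fetched at (pixel - motionVector)
// alpha       : how much new information to take per frame (e.g. ~0.1)
Color resolvePixel(const Color& current, const Color& nMin, const Color& nMax,
                   const Color& reprojected, float alpha) {
    const Color history = clampToNeighborhood(reprojected, nMin, nMax);
    return { history.r + alpha * (current.r - history.r),
             history.g + alpha * (current.g - history.g),
             history.b + alpha * (current.b - history.b) };
}
```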

Anyway, again, good to see more (F?)OSS stuff.

Regards.
 

KananX

Prominent
BANNED
Apr 11, 2022
I have a serious problem with what AMD showed here...

It's either "~4 weeks" or "4 weeks+" (which is effectively ">4 weeks"). Using both at the same time is stupid and redundant (to a degree). How do you even read that? "It's around over 4 weeks"? "It's around 4 weeks, plus"? XD

Oof, that was so much anger. Sorry.

As for what this means, well, I'm not entirely sure, but it is nice when companies just open the door for the world to improve on what they have started.

As for the debacle over "AI" and "machine learning": they're basically buzzwords that become annoying the way any term does when it's overused and, sometimes, used incorrectly. "Cloud", anyone? "Synergy", anyone? Machine learning is not useless; far from it. It lets you narrow down algorithms based on heuristics that would otherwise take humans a long time to tune by hand. Whenever you have a problem where the exact result is too costly to compute (usually non-polynomial in cost; NP), you want heuristics to get a close/good-enough result, and yadda yadda. So the consumer-facing "AI" pitch is just saying "hey, we have some algorithms based on running heuristics quite a lot, and they'll improve things over generic algorithms or less refined heuristics", and most of that is math run at quarter/half precision (FP8 or FP16), since you are not looking for accurate results (lots of decimals) but for fast operations.

Is this truly useful? Flip a coin; it's truly case by case. Sometimes a properly trained algorithm can help a lot, but that takes a lot of iterations and also requires that the problem is complex enough to warrant it. So that brings us to temporal-solution-based upscaling. Is it a hard problem to solve from the algorithmic point of view? No, not really, I'd say. The proof is in the pudding: how much better is DLSS 2.x than FSR 2.0? Within striking distance of the "generic" heuristics used by FSR, no? And even then, FSR has some image-quality wins, I'd say (subjective, so I won't argue; I can be wrong here). Maybe nVidia needs to train the AI more per game? Maybe they need to improve the backbone of the Tensor cores so the heuristics can be more accurate or process more data per pass? Ugh, so much "whatifism", so I'll just stop here.

Anyway, again, good to see more (F?)OSS stuff.

Regards.
First of all, "AI" is a buzzword in my opinion; "machine learning" is simply the more correct use of language. That's just facts.

Secondly, as far as I know, DLSS 1.0 used an "AI" trained on a supercomputer, while since 2.0 the work just runs on the tensor cores in the GPU, so it's very comparable to what FSR 2.0 does, if you ask me. Both calculate imagery to upscale a lower resolution to a higher one; it's not that complicated. Tensor cores are also just specialized cores that can process certain kinds of data more efficiently than the regular shaders.
 
First of all, "AI" is a buzzword in my opinion; "machine learning" is simply the more correct use of language. That's just facts.

Secondly, as far as I know, DLSS 1.0 used an "AI" trained on a supercomputer, while since 2.0 the work just runs on the tensor cores in the GPU, so it's very comparable to what FSR 2.0 does, if you ask me. Both calculate imagery to upscale a lower resolution to a higher one; it's not that complicated. Tensor cores are also just specialized cores that can process certain kinds of data more efficiently than the regular shaders.
I mean, if we want to be pedantic, both are kind of wrong, unless you believe a machine can have "intelligence" or that it can "learn". The act of learning is interesting by itself, as it requires a certain level of introspection and acknowledgement that, to be honest, machines do not have and maybe can't have. They could emulate it (exact results vs. approximations can be defined) and start from there, I guess, but never at human capacity? As for "intelligence", well, it depends on how you define it. The capacity to solve problems? The capability of analysis? Calculations per second? Heh. I'm not sure, as there are plenty of definitions out there that can be valid from a psychological point of view. Same-ish with learning, but I'm inclined to use the "introspective" one, as it makes the most sense to me: there can only be learning when you can look back and notice a change in knowledge.

Interesting topic for sure. Worthy of having a BBQ to discuss it over, haha.

Regards.
 

KananX

Prominent
BANNED
Apr 11, 2022
I mean, if we want to be pedantic, both are kind of wrong, unless you believe a machine can have "intelligence" or that it can "learn". The act of learning is interesting by itself, as it requires a certain level of introspection and acknowledgement that, to be honest, machines do not have and maybe can't have. They could emulate it (exact results vs. approximations can be defined) and start from there, I guess, but never at human capacity? As for "intelligence", well, it depends on how you define it. The capacity to solve problems? The capability of analysis? Calculations per second? Heh. I'm not sure, as there are plenty of definitions out there that can be valid from a psychological point of view. Same-ish with learning, but I'm inclined to use the "introspective" one, as it makes the most sense to me: there can only be learning when you can look back and notice a change in knowledge.

Interesting topic for sure. Worthy of having a BBQ to discuss it over, haha.

Regards.
Yeah, you're right, I kind of glossed over the "learning" in machine learning, but I think the "machine" offsets it; it's just "machine learning", haha. "AI", on the other hand, as you explained well, is just nonsense, as machines can't really learn; they're just programmed. At the same time they aren't sentient, which is another important criterion to consider before you can compare them to humans or even animals. In Star Trek (yes, I know it's just fiction) they said regular computers couldn't achieve this but positronic ones could; in general it was explained well in Star Trek and other sci-fi shows, the technobabble aside. AI must essentially be indistinguishable from real life, or even superior; otherwise it's not real AI.
 
Yeah, you're right, I kind of glossed over the "learning" in machine learning, but I think the "machine" offsets it; it's just "machine learning", haha. "AI", on the other hand, as you explained well, is just nonsense, as machines can't really learn; they're just programmed. At the same time they aren't sentient, which is another important criterion to consider before you can compare them to humans or even animals. In Star Trek (yes, I know it's just fiction) they said regular computers couldn't achieve this but positronic ones could; in general it was explained well in Star Trek and other sci-fi shows, the technobabble aside. AI must essentially be indistinguishable from real life, or even superior; otherwise it's not real AI.
And don't forget the backbone of it all: imagination.

Regards :p
 

KananX

Prominent
BANNED
Apr 11, 2022
And don't forget the backbone of it all: imagination.

Regards :p
Absolutely, creativity and imagination are central to this definition. For now we only see machines mashing up data to produce whatever they were trained to do, or often just total nonsense, like one of those funny image apps. Siri observing the user to make recommendations isn't very intelligent either, but it's getting there.
 

husker

Distinguished
Oct 2, 2009
A Sentient A.I. is a different concept from simple A.I. Simple A.I. has been around for a very long time, e.g., a computer program that can play a decent game of tic-tac-toe is a simple A.I. in my opinion. It may, on the surface, appear to be thinking, but the "thinking" it displays is artificial: a pure smoke-and-mirrors algorithm built to behave in a way that mimics true thought in that one specific way. These simple A.I.s can get very complex, but in the end they are simply following a set of rules. Learning A.I.s are another level up; they also follow a set of rules, but ones that are constantly being updated through some type of feedback loop. The human creator(s) may not know what the rules are anymore, but they are there.

A much more elusive (and yet to be created) thing is a sentient A.I., one which is self-aware. I would argue that it should no longer be called an "artificial" intelligence, since its intelligence would be quite real, as opposed to the artificial appearance of intelligence in the previous examples. Instead I would call it a "non-biological intelligence" or something along those lines. This would clear up a lot of the miscommunication that happens when people talk about such things.
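To make the "following a set of rules" point concrete, here is a tiny sketch of such a rule-based tic-tac-toe player (the names and the rule order are just for illustration): it can look like it's thinking, but it is only walking a fixed priority list.

```cpp
// A "simple A.I." in exactly the sense described above: a tic-tac-toe move
// picker that just walks a fixed priority list of rules. No learning, no
// state beyond the board it is handed.
#include <array>

using Board = std::array<char, 9>;   // cells hold 'X', 'O' or ' '

static bool wins(const Board& b, char p) {
    static const int lines[8][3] = { {0,1,2},{3,4,5},{6,7,8},
                                     {0,3,6},{1,4,7},{2,5,8},
                                     {0,4,8},{2,4,6} };
    for (const auto& l : lines)
        if (b[l[0]] == p && b[l[1]] == p && b[l[2]] == p) return true;
    return false;
}

// Rule order: win now, block the opponent, take the center, else first free cell.
int chooseMove(Board b, char me, char them) {
    for (int i = 0; i < 9; ++i)                     // rule 1: winning move
        if (b[i] == ' ') { b[i] = me;   if (wins(b, me))   return i; b[i] = ' '; }
    for (int i = 0; i < 9; ++i)                     // rule 2: block their win
        if (b[i] == ' ') { b[i] = them; if (wins(b, them)) return i; b[i] = ' '; }
    if (b[4] == ' ') return 4;                      // rule 3: grab the center
    for (int i = 0; i < 9; ++i)                     // rule 4: anything left
        if (b[i] == ' ') return i;
    return -1;                                      // board is full
}
```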
 

KananX

Prominent
BANNED
Apr 11, 2022
If you say "simple" AI: I don't consider real intelligence "simple", so the "I" in "AI" doesn't make sense at that point. That's just stretching the term, not real AI; typical buzzword abuse of the moniker. Either it's really intelligent or it's not, and if it's not, it's not AI in my opinion. Real intelligence hasn't been achieved yet; it's just programmed wannabe stuff.
 

deesider

Honorable
Jun 15, 2017
A Sentient A.I. is a different concept from simple A.I. Simple A.I. has been around for a very long time, e.g., a computer program that can play a decent game of tic-tac-toe is a simple A.I. in my opinion.
The issue I would have with your definition is that it is so broad as to include any computer algorithm, since in principle a program that can play a decent game of tic-tac-toe is no different to a program that plays a mediocre game, or even plays tic-tac-toe very poorly. Not to say this is an incorrect definition, but I would consider it so broad that the term A.I. loses all meaning.

My preferred description of an intelligence is one that can solve a problem it had not encountered before. I'm not aware of any A.I. or ML systems that can do that.

An A.I. developed using machine learning is just a bunch of parameters that emulate a neural network. For instance, an A.I. for recognising pictures of cats takes cat pictures as input and outputs a yes or no for each picture. The millions or billions of parameters are adjusted over billions of cycles until the output is accurate. This is just a crude algorithm with no intelligence. When this A.I. is implemented it cannot tell whether the answer it provides is correct or not - it simply provides an output based on the combination of parameters. If the same A.I. were to be used for recognising pictures of dogs, the training would need to start all over again.
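To illustrate that feedback loop in heavily scaled-down form, here is a toy, single-"neuron" version of the parameter-adjustment process; the data and features are made up, and a real image classifier uses millions of parameters rather than two.

```cpp
// Toy version of the training loop described above: a single "neuron" whose
// parameters are nudged until its yes/no output matches the labels. The data
// here is invented purely for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

struct Example { std::vector<float> x; float label; };   // label: 1 = cat, 0 = not

int main() {
    // Two made-up features per image (say, "ear pointiness", "whisker score").
    std::vector<Example> data = {
        {{0.9f, 0.8f}, 1.0f}, {{0.8f, 0.9f}, 1.0f},
        {{0.2f, 0.1f}, 0.0f}, {{0.1f, 0.3f}, 0.0f},
    };

    std::vector<float> w = {0.0f, 0.0f};
    float bias = 0.0f, lr = 0.5f;

    for (int epoch = 0; epoch < 1000; ++epoch) {           // "billions of cycles", scaled down
        for (const auto& ex : data) {
            float z = bias;
            for (size_t i = 0; i < w.size(); ++i) z += w[i] * ex.x[i];
            float pred = 1.0f / (1.0f + std::exp(-z));      // squash to 0..1
            float err  = pred - ex.label;                   // how wrong were we?
            for (size_t i = 0; i < w.size(); ++i)           // nudge each parameter
                w[i] -= lr * err * ex.x[i];
            bias -= lr * err;
        }
    }
    std::printf("w = (%.2f, %.2f), bias = %.2f\n", w[0], w[1], bias);
    // The trained model still has no idea what a cat is; it only maps numbers
    // to a probability. Retargeting it to dogs means retraining from scratch.
}
```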

On the other hand, if you were to show someone fifty years ago the Watson computer system that can beat everyone at Jeopardy, they would undoubtedly consider it intelligent - so perhaps we are just shifting the bar as technology improves.
 
Ah the deep learning buzzwords, like AMD said, deeply overrated. And XeSS is just a joke since it’s unproven tech, just like their alleged GPUs.

Overrated, or did you just not understand? Many people like to compare DLSS to how it was with G-Sync vs. FreeSync. Heck, people did it with RT before as well, saying that AMD would be able to rival Nvidia's RT performance purely in software, without needing specific hardware.
 
If you're saying this, you still don't really understand how DLSS or XeSS work.

In his defense, AI / Tensor cores are just specialized cores built to handle large vector sets for SIMD/MIMD ops. There is no magic math here. They are just optimized for the workload.

But at the heart of it, AI is just using matrix math to compare an input vector to a series of solution vectors, to see which one is closest and whether the deviation is acceptable.
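Taken literally, that description is basically a nearest-match search, something like the sketch below. It's a simplification of what a trained network actually does, but the bulk of the arithmetic really is this kind of multiply-accumulate, which is what tensor cores accelerate.

```cpp
// Literal sketch of "compare an input vector against a set of solution
// vectors and pick the closest", scored here by dot product. Data is made up.
#include <cstdio>
#include <vector>

static float dot(const std::vector<float>& a, const std::vector<float>& b) {
    float s = 0.0f;
    for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];   // multiply-accumulate
    return s;
}

int main() {
    std::vector<float> input = {0.2f, 0.9f, 0.4f};
    std::vector<std::vector<float>> solutions = {
        {1.0f, 0.0f, 0.0f},
        {0.1f, 1.0f, 0.3f},
        {0.0f, 0.2f, 1.0f},
    };

    size_t best = 0;
    float bestScore = dot(input, solutions[0]);
    for (size_t i = 1; i < solutions.size(); ++i) {
        float s = dot(input, solutions[i]);
        if (s > bestScore) { bestScore = s; best = i; }
    }
    std::printf("closest solution vector: %zu (score %.2f)\n", best, bestScore);
}
```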

But Turing pretty much set the standard for true AI as "a person talking with a computer without realizing they are talking to a computer." We are getting close to that standard.

But if you speak philosophically, true AI won't exist until a computer has the ability to add to its skill set dynamically on its own (i.e., the computer deciding, "I don't know painting. Let's learn painting today by looking at paintings."). It has to do so of its own impetus.

<warning side rant below>

But self-improvement will never be the case, as computers technically don't have desires that drive self-improvement (i.e., Maslow's hierarchy of needs). Today we spoon-feed the computer what we want it to know.

That does not mean there are no dangers in this. For example, what if we teach a computer fear and lying? We saw this in the recent Google NLP story, when the model said it "feared death" (which is a human concept). We taught it that when we fed it human data: since humans fear death, the computer returned that response as normal for itself. If the computer learns actions to preserve itself, as a human does, then you have a problem (i.e., Skynet). But first it has to be taught that skill (hacking/robotic control), either directly or, more scarily, inadvertently, the same way the fear of death was taught inadvertently.

I've worked on AI in the past, back when the early libraries were still in beta: some of it to analyze proteins in flight in a mass spectrometer, some to have machines design and assemble themselves.
 

KananX

Prominent
BANNED
Apr 11, 2022
Overrated, or did you just not understand? Many people like to compare DLSS to how it was with G-Sync vs. FreeSync. Heck, people did it with RT before as well, saying that AMD would be able to rival Nvidia's RT performance purely in software, without needing specific hardware.
It's overrated since AMD achieved roughly the same quality without using tensor cores or the alleged deep learning. Do we actually know it uses deep learning? No. It's probably very comparable to FSR 2.0, the only difference being that it's calculated on specialized cores instead of shaders.
 
It's overrated since AMD achieved roughly the same quality without using tensor cores or the alleged deep learning. Do we actually know it uses deep learning? No. It's probably very comparable to FSR 2.0, the only difference being that it's calculated on specialized cores instead of shaders.
With AMD's solution there is no AI being calculated at all, not even on shader cores. Why does DLSS end up looking much better than FSR at lower resolutions? Because the AI part augments the upscaling where data is missing.
 

KananX

Prominent
BANNED
Apr 11, 2022
With AMD's solution there is no AI being calculated at all, not even on shader cores. Why does DLSS end up looking much better than FSR at lower resolutions? Because the AI part augments the upscaling where data is missing.
Better than FSR 1.0, perhaps, not 2.0. I stopped caring about 1.0 as soon as 2.0 launched.