Lots of questions here.
Firstly, FidelityFX is only a few days old in the news cycle. Not much testing has occurred yet, so it will be a while before independent testing and reviews appear. Keep an eye on the tech news.
They do operate very differently. DLSS is a temporal reconstruction technique: the game renders at a lower resolution, and DLSS combines that with data from previous frames (and motion vectors from the engine) to rebuild what the current frame should look like at a higher resolution. The Deep Learning part is that this reconstruction is performed by a neural network trained on high-resolution reference images, rather than being hand-tuned by anyone. It also runs on bespoke hardware within the Nvidia GPU (the Tensor cores), separate from the standard rendering pipeline.
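To make the temporal part concrete, here is a toy sketch of the history reprojection a temporal upscaler relies on, in plain Python/NumPy. The function name, array shapes, and fixed blend factor are illustrative assumptions on my part; the real DLSS replaces the simple blend below with a trained neural network running on the Tensor cores.

```python
import numpy as np

def temporal_accumulate(current, history, motion, blend=0.1):
    """current: H x W x 3 frame just rendered.
    history: H x W x 3 accumulated result from previous frames.
    motion:  H x W x 2 per-pixel motion vectors, in pixels."""
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: look up where each pixel was located in the previous frame.
    prev_y = np.clip(ys - motion[..., 1], 0, h - 1).astype(int)
    prev_x = np.clip(xs - motion[..., 0], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]
    # Blend a small amount of the newly rendered frame into the reprojected
    # history, so detail accumulates across frames instead of coming from a
    # single image. (DLSS learns this blending instead of hard-coding it.)
    return blend * current + (1.0 - blend) * reprojected

# Example with random data standing in for real frames and motion vectors.
cur = np.random.rand(270, 480, 3)
hist = np.random.rand(270, 480, 3)
mvec = np.zeros((270, 480, 2))
print(temporal_accumulate(cur, hist, mvec).shape)  # (270, 480, 3)
```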
My understanding is that FidelityFX Super Resolution is basically a post-processing step (an edge-aware upscale followed by a sharpening pass, see the sketch below), and it runs on the same general-purpose shader hardware as any other effect. Some have compared it to DLSS 1.0, which was not widely accepted as being worthwhile over more traditional anti-aliasing techniques. I expect the same here. AMD's next stab at it will probably be more worthwhile.
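For contrast, here is a minimal sketch of a spatial "upscale then sharpen" post-process of that general kind, in Python with NumPy/SciPy. The function name, scale factor, and fixed sharpening kernel are my own illustrative assumptions, not AMD's actual algorithm (which uses an edge-adaptive upscale plus a contrast-adaptive sharpen); the point is only that it is a single pass over an already-rendered frame on ordinary hardware.

```python
import numpy as np
from scipy.ndimage import zoom, convolve

def upscale_and_sharpen(frame: np.ndarray, scale: float = 1.5) -> np.ndarray:
    """frame: H x W x 3 float image rendered at a lower internal resolution."""
    # Step 1: resize to the target display resolution (bilinear here).
    upscaled = zoom(frame, (scale, scale, 1), order=1)

    # Step 2: apply a simple sharpening kernel to restore some edge contrast
    # lost in the resize. Real implementations adapt the strength per pixel.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    sharpened = np.stack(
        [convolve(upscaled[..., c], kernel, mode="nearest") for c in range(3)],
        axis=-1,
    )
    return np.clip(sharpened, 0.0, 1.0)

# Example: a 720p frame upscaled toward 1080p.
low_res = np.random.rand(720, 1280, 3)
high_res = upscale_and_sharpen(low_res, scale=1.5)
print(high_res.shape)  # (1080, 1920, 3)
```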
Mobile processors with integrated AMD graphics should support this technology, yes. Apparently it is also open for anyone that wants to implement it, including Nvidia and Intel, if they so choose.
Game developers will have to make the option available, I believe.
Keep in mind that the upscaling pass itself is not free. The idea behind DLSS and FFX is that rendering fewer pixels and then upscaling has LESS impact on performance than rendering at native resolution, while preserving more image quality than a plain resolution drop. For example, rendering internally at 1440p and upscaling to 4K means shading roughly 3.7 million pixels per frame instead of 8.3 million. Given the quality concerns above, you may be better off leaving it off unless a title genuinely struggles to run.