The same argument can be applied against several other synthetic benchmarks, including Geekbench, 3DMark Time Spy, etc. Take Geekbench, for example: it also has the 11400 beating the 5600X in both single-core (~1680 vs. ~1640) and multi-core (~8450 vs. ~8250), by roughly the same margin Userbenchmark shows. So I don't see why Userbenchmark gets singled out and attacked, if not because of fanboyism over the controversial weighting in its overall rankings.
Not many consider Geekbench to be a particularly reliable benchmark either, and as a result, you don't tend to see it used in many professional reviews of PC hardware. And when reviews include synthetic benchmarks, they are typically only included as a small part of the total testing suite. You don't see entire reviews based solely around a single synthetic benchmark, but rather a collection of software from a number of fields. The synthetic tests are mainly just there to show potential performance gains in the context of what they are specifically testing.
Userbenchmark, on the other hand, is constantly trying to sell the idea that their synthetic benchmark provides a one-size-fits-all, accurate representation of a computer's performance, while repeatedly claiming that everyone else testing real-world software who disagrees with them is wrong. And they clearly show bias toward certain brands of hardware, not just in the way they adjust the weighting of their results, but also in the unprofessional pro-Intel, anti-AMD nonsense they are constantly writing on their site, all while insisting they are the only unbiased source of performance comparisons. Their only response to feedback that their site might have flaws is to move further in the opposite direction while ranting about marketing departments and professional reviewers being out to get them. And that's why people often single them out as a bad benchmark site, at least as far as unbiased hardware comparisons are concerned.
And I could argue that the exact opposite is happening. I could argue that Userbenchmark (as well as other synthetic benchmarks like Geekbench) tests things in a way that makes better use of modern CPU architectures than most of the benchmarks reviewers use. Many of those are either legacy benchmarks or unoptimised benchmarks built on open-source libraries, and they artificially level the playing field because they don't make the most of modern CPUs.
They have apparently been around for 10 years. Have they even updated their test algorithm significantly since then? Considering they still include the results of tests performed many years ago in their data set, I doubt they have improved the underlying code much. Doing so would likely require them to get rid of all of the old results. And they are not even very clear about what exact kinds of synthetic workloads their CPU benchmark is testing. All they claim is that they are testing the performance of floating point, integer and memory operations, and don't specify anything more. So it sounds like it's just a very rudimentary test, grinding away at basic CPU operations, probably in a way that's not all that representative of what actual software does in the real world.
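To illustrate what "grinding away at basic CPU operations" can amount to, a benchmark of that sort may be little more than tight loops hammering each category of operation and timing them. This is a hypothetical sketch of that style of test, not Userbenchmark's actual code; the function names and loop bodies are my own invention:

```python
import time

def time_loop(fn, iterations=1_000_000):
    """Run a workload function and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    fn(iterations)
    return time.perf_counter() - start

def integer_ops(n):
    # Simple dependent chain of adds, XORs, and shifts.
    acc = 0
    for i in range(n):
        acc = (acc + i) ^ (i << 1)
    return acc

def float_ops(n):
    # Repeated multiply-add and divide on a running value.
    acc = 1.0
    for i in range(1, n + 1):
        acc = acc * 1.0000001 + 1.0 / i
    return acc

def memory_ops(n):
    # Pseudo-random reads over a small buffer to exercise memory access.
    buf = list(range(1024))
    acc = 0
    for i in range(n):
        acc += buf[(i * 7919) % 1024]
    return acc

if __name__ == "__main__":
    for name, fn in [("int", integer_ops), ("float", float_ops), ("mem", memory_ops)]:
        print(f"{name}: {time_loop(fn):.3f}s")
```

The point is how little such loops resemble real software: no branchy game logic, no mixed workloads, no multi-threaded contention. Scores from tests like these can diverge from real-world performance on architectures they weren't tuned for.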
Then, while not included in their "effective speed" score, you have their new "eFPS" gaming tests, based solely around a handful of relatively older, mostly e-sports titles that all happen to be ones that have traditionally favored Intel hardware. I would hardly say games like 2012's CS:GO are making better use of modern CPUs than the titles most reviewers test with, but according to them, those modern games are "mostly unplayed by real users". The five games they test might be popular, but they cover only a small segment of the types of games people will be interested in upgrading their processor for, and are not at all representative of today's newer, more multithreaded AAA titles. Something like an i3-9100 might fare rather well in their "eFPS" ranking, with an i9-11900K scoring only 15% higher, but its four threads are going to be ill-suited for running many modern games smoothly, let alone those coming out in the near future.
As for why they are like this, it could be that they have a monetary interest in promoting Intel hardware, whether that's direct support that they claim to not receive, or something less direct like holding lots of Intel stock. Or maybe their benchmark is simply based on outdated and simple methods of measuring performance that don't hold up particularly well at providing accurate comparisons between modern architectures, and rather than admit to it and find ways to improve, they simply double down on the claim that their benchmark is perfect while any professional reviews indicating otherwise are flawed and biased. Or perhaps they worked for or otherwise dealt with one of the companies in the past and that is influencing them. Or they just don't handle criticism well.
Whatever the reason, they certainly don't try very hard to hide their bias, so it's kind of hard to take their benchmark seriously for hardware comparisons. Which is unfortunate, since their site actually does a decent job as a diagnostic tool when comparing against others with the same hardware, assuming one can sift through all the redundant information on their results page to find the meaningful numbers. I also like the idea of being able to compare the relative performance of hardware released many generations apart. Their increasingly questionable practices have made me hesitant to recommend using them, though.