News Controversial benchmarking website goes behind paywall — Userbenchmark now requires a £10 monthly subscription

Agree with you, the only semi-useful thing was whether or not a PC was running XMP. That info can be found very easily in CPU-Z.

My own opinion of it is this:

Not only did you get information about the hardware in the PC, RAM and CPU speeds, what drives are present, graphics card, and so on, but quite importantly IMO, background CPU usage. You can ask an unknowing PC user for that information a hundred different ways and they still won't understand what you are asking for, because they simply don't know.

The important aspect of UBM is that its results are a composite of every system that has run their benchmark suite. Just like game engines, some of its tests are going to run better on AMD or Intel, better on Nvidia or AMD (and I don't think Intel is there yet), but it is telling you that, measured against every other system tested, your system delivers "this" percentage of performance relative to that composite, with this testing method.

The aspect that I cannot understand is why someone would wish to pay to run that benchmark over and over, just watching their composite score go down over time. Most users also don't understand that the "decrease" in performance isn't their system actually performing worse; it is just falling in the composite rankings against the newer, more capable hardware that is constantly being tested.
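To illustrate that mechanism with a quick Python sketch (entirely made-up numbers, just to show the effect): the system's absolute score never changes, yet its percentile slides as newer, faster submissions join the composite pool.

```python
import random

random.seed(42)

# Your system's absolute benchmark score never changes.
my_score = 100.0

# Start with a pool of older submissions, mostly slower than yours.
pool = [random.gauss(80, 15) for _ in range(1000)]

def percentile(score, pool):
    """Percentage of pooled submissions that this score beats."""
    return 100.0 * sum(s < score for s in pool) / len(pool)

history = [percentile(my_score, pool)]

# Each year, newer and faster hardware is benchmarked and joins the pool,
# so the same absolute score ranks lower against the composite.
for year in range(1, 4):
    pool += [random.gauss(80 + 20 * year, 15) for _ in range(1000)]
    history.append(percentile(my_score, pool))

for year, p in enumerate(history):
    print(f"year {year}: {p:.1f}th percentile")
```

The printed percentile drops every "year" even though `my_score` is constant, which is exactly the "decrease" users misread as their own system degrading.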

As I said before, take it with a grain of salt. It had useful features IMO, but like many other things, it comes and goes on this ever changing internets of the webz. 😉
 
I just downloaded it. I now have to complete the annoying, time-wasting captcha thing. It is very aggravating with a touchpad, and only moderately aggravating with a mouse. At least it is free; I wouldn't pay $10 to use it. I think that captcha came out as an extension of their skill test, and the skill test might have come out to appease the people who wanted to show they can get good shooter results on a lower-specced system.

I used to use it a lot more often. It was a quick, convenient, all-in-one way to check that everything was running well, and to see the results of tuning with a different testing method. The hoops I have to jump through to run it now are a significant added annoyance, though.

The CPU tests do seem to favor Intel, but for the most part they also favor normal consumer use and responsiveness. They are biased towards better performance at lower thread counts rather than scoring flat across all of them, and biased towards lower memory latency. Both of these seem like good choices for the non-technical user who would use UserBenchmark in the first place. Technical users with workstations have benchmarks more specific to their use cases that would be more applicable. There are other CPU benchmarks that are far worse, like the 3DMark CPU tests in gaming benchmarks, which are often wildly disproportionate to relative gaming performance.

They are not perfect. I think the memory latency part of the CPU score should be replaced by a system memory latency score with a multiplier for performance in the smaller, more frequently accessed regions, to better reflect the benefit of a large cache. But you can't have everything.

And the CPU reviewers who do the text writeups are definitely Intel biased. Not as bad as the trash-talking commenters on some sites, but some do sound in the same vein. Maybe the reviewers are trying to push back against what they see as pervasive pro-AMD bias on many tech media sites? Maybe they are trying to drum up hype and target some benchmark entertainment market? Get free media exposure?

Also, the results give you a lot of compliments where other benchmarks just give you numbers, which also sets this software apart as having a less technical, more entertainment-oriented slant. For example: https://www.userbenchmark.com/UserRun/67297088
If there is visible bias then all the results are null. The Conclusion comments are just copy/paste AMD hate to the point of hilarity, and add nothing to the site's content other than to inform the user they should go elsewhere for valid, useful results. People here accusing Tom's of bias need to head over to UBM to see what it really looks like. All reviewers inject some personal opinion into their articles, but over at UBM...well...words fail me. At least the kind that won't get me banned here. There are better suites to recommend and use. UBM may have been the bastion of PC benchmarking at some point, but those days are gone. Let it die; it has become the villain.
 
I've never paid attention to benchmarks. I know what I need and what I want before purchasing a new CPU or GPU. If you follow the hype(rbole), then you are going to waste money on hardware you really don't need.
 
If there is visible bias then all the results are null.
So then we can't watch any YouTube hardware reviews?
I can understand somebody not liking to hear bias against some CPU they have, but the visible bias seems to be less than forum trash talk, on a benchmark that seems to be geared towards mainstream entertainment.
Their results seem in line with CPU-Z; does that mean we can no longer use CPU-Z?

Something I like about UBM is that it is geared towards the mainstream, less informed, less analytical user, and gives results more representative of someone who uses their PC for a mix of web, office, and gaming than the vast majority of other benchmarks out there. Sure, if you need to test specific performance on something like Cinebench, the corresponding Cinebench benchmarks will be more accurate for that, but they don't translate well to general plebeian uses where 4 fast threads will usually outperform 128 slow ones.

The average Joe who bought the family a new PC for Christmas will be completely misled if he believes that CB23 multi (or most productivity benchmarks) equals normal relative performance and power consumption. He will be much better informed by UBM, except for the (now ubiquitous on the internet) bias.

The results are more valid for the general uses of the majority of PC users than most benchmarks. And, except for the new intro ordeal you have to go through, it is easier to run and generally easier to understand. It is really a better fit than most benchmarks for the less involved PC user.

The written opinions are biased and paint AMD in a poorer light than is realistic. Maybe it gets them more downloads? UBM seems like a bench that people with prebuilts would use, and most of those are Intel. UBM would be better if they toned those opinions down, but they still have a valid mainstream benchmark.
 
Last edited:
I checked it out after reading this too and I also found no evidence of a paywall. I also did a quick Google search and found this article as apparently the only source of this information, aside from the single Twitter post the article refers to. All the comments out there that I saw are either people ragging on UB or people who have checked out the site and found no evidence of a paywall.

Considering the insanity of charging $10/month or anything really, I doubt this was ever true.
Not sure if it is $10 per year or per month. The title says "$10 monthly subscription," but the article says "subscribers to the $10-per-year Pro plan can test..."
 
So then we can't watch any YouTube hardware reviews?
I can understand somebody not liking to hear bias against some CPU they have, but the visible bias seems to be less than forum trash talk, on a benchmark that seems to be geared towards mainstream entertainment.
Their results seem in line with CPU-Z; does that mean we can no longer use CPU-Z?

Something I like about UBM is that it is geared towards the mainstream, less informed, less analytical user, and gives results more representative of someone who uses their PC for a mix of web, office, and gaming than the vast majority of other benchmarks out there. Sure, if you need to test specific performance on something like Cinebench, the corresponding Cinebench benchmarks will be more accurate for that, but they don't translate well to general plebeian uses where 4 fast threads will usually outperform 128 slow ones.

The average Joe who bought the family a new PC for Christmas will be completely misled if he believes that CB23 multi (or most productivity benchmarks) equals normal relative performance and power consumption. He will be much better informed by UBM, except for the (now ubiquitous on the internet) bias.

The results are more valid for the general uses of the majority of PC users than most benchmarks. And, except for the new intro ordeal you have to go through, it is easier to run and generally easier to understand. It is really a better fit than most benchmarks for the less involved PC user.

The written opinions are biased and paint AMD in a poorer light than is realistic. Maybe it gets them more downloads? UBM seems like a bench that people with prebuilts would use, and most of those are Intel. UBM would be better if they toned those opinions down, but they still have a valid mainstream benchmark.
None of that changes the fact that the results of every benchmark there are suspect. We aren't talking about fit for use in regards to which suite best represents my use case; we are discussing fit for use AT ALL. If I can't trust UBM to present my system's benchmark results without bias (the Conclusion comments), then how can I trust my results at all? Did they skew that result positively for my Nvidia GPU and negatively for my AMD CPU? I can't know, and therefore I can't put any trust in the results.

Is the system summary useful? Sure, but there are innumerable other suites that can do that, other benchmark suites as well. Is UBM accessible for the entry-level user? Sure, but that doesn't make it any good; its results are still suspect. New PC enthusiasts are better served using other, more advanced tools and learning what the results may mean from commenters on sites such as this one. Most threads evolve from a poor/odd UBM score to "post system specs, use HWiNFO and post," and ultimately the system is fine, or the problem is revealed by another test suite.

At best UBM generates more traffic for Tom's and generates both sites some ad revenue. It's disappointing, as UBM could be a VERY useful tool, like it used to be. Go ahead and use it if you wish, but as someone who does advanced diagnostics for a living, I cannot use or recommend such a flawed tool.
 
  • Like
Reactions: bit_user
RE: This morning's update to the article:
When there are free slots, users will have to complete a 3D captcha minigame where the goal is to shoot down 13 ships. The minigame isn’t particularly difficult on the surface, but it can get very tedious as there are very few opportunities for users to actually shoot down any ships. We attempted to complete the captcha ourselves but gave up after a few minutes.
The number of ships that need to be shot down varies. For my first test, it was only 3 or 4, and for my next one it was around 15.

Also, the minigame is pretty tedious, but it becomes less so when you realize you can move your ship around using WASD (which wasn't immediately obvious, at least to me) and then actively search out and destroy the remaining ships.
 
  • Like
Reactions: m7dm7d and bit_user
None of that changes the fact that the results of every benchmark there are suspect.
Results of every bench everywhere are suspect, since everybody uses different mobos and nobody changes any settings, so every benchmark is wildly inconsistent because every mobo ships with wildly different defaults.
Here are the results of 10 different mobos and we have 10 different results for the exact same CPU.
[image: the same CPU benchmarked on 10 different motherboards, with 10 different results]
 
I ran a benchmark there 5 months ago, with an AMD 5900X and a 7900XTX system.
My scores were 298% gaming, 108% desktop and 328% workstation.
All 3 results were rated "UFO", the highest possible rating they provide.
Link to results: https://www.userbenchmark.com/UserRun/63733051

This is what they had to say about the CPU. It is copied directly from their benchmark site, and it has nothing to do with benchmarking and everything to do with trying to steer sales towards Intel.

"Despite this clear performance deficiency, AMD supported 3000 series sales with an aggressive and successful marketing campaign to easily outsell Intel over the last 12 months. Given the real performance uplift observed in the 5000 series, and the absence of any meaningful marketing from Intel, we expect CPU sales to shift even further in AMD’s favour. Gamers that do not wish to pay “marketing fees” should investigate Intel’s $175 USD 11400F, the $660 USD savings would be far better spent on a higher tier GPU."

That site is the biggest <Mod Edit> junk testing site I have ever found on the internet, and I hope this latest "change" is the end of them.
 
Last edited by a moderator:
Results of every bench everywhere are suspect, since everybody uses different mobos and nobody changes any settings, so every benchmark is wildly inconsistent because every mobo ships with wildly different defaults.
Here are the results of 10 different mobos and we have 10 different results for the exact same CPU.
[image: the same CPU benchmarked on 10 different motherboards, with 10 different results]
Indeed, but when the benchmark suite itself is suspect? That adds a whole new dimension. At least here, in the case of the motherboards, we should be able to manually standardize settings and hopefully achieve results within a few percent. When the benchmark picks and chooses who wins, what use is that? Testing suites and methodology must be neutral, consistent, and standardized, so we can consistently replicate results like the ones above and correct for the different settings.
 
  • Like
Reactions: bit_user
Yes, of course, but it comes down to both Tom's and UBM earning money, and possibly so for their survival (maybe not Tom's so much any more). This is the only way UBM can make money. Can we blame them for trying?
I recently noticed TechPowerup started asking users for donations. Not sure how long that's been going on, but I'm already a voluntary subscriber to a couple other sites.

I think the key to long-term success is to keep it small. GamersNexus seems to have the right idea. Don't sell out, which Tom's and AnandTech both did, a long time ago. That puts the publication's fate in the hands of a corporate entity that cares more about quarterly revenues than journalistic integrity or serving the interests of readers.
 
The important aspect of UBM is that its results are a composite of every system that has run their benchmark suite. Just like game engines, some of its tests are going to run better on AMD or Intel, better on Nvidia or AMD (and I don't think Intel is there yet), but it is telling you that, measured against every other system tested, your system delivers "this" percentage of performance relative to that composite, with this testing method.

The aspect that I cannot understand is why someone would wish to pay to run that benchmark over and over, just watching their composite score go down over time. Most users also don't understand that the "decrease" in performance isn't their system actually performing worse; it is just falling in the composite rankings against the newer, more capable hardware that is constantly being tested.
I think there are two separate questions, that are each important:
  1. How does this system compare to others with similar hardware components?
  2. What's the relative performance of this system, within the current PC landscape?

The first is a valuable troubleshooting tool: for instance, letting you know if your memory performance is far worse than others' with the same CPU. This could surface problems with memory configuration, or maybe someone simply pairing very slow memory with a fast CPU. Another example I can imagine is a CPU that's getting heat-soaked by the GPU or has a poorly installed heatsink.

The second should be split into several categories. First, it should window the comparison to just those submissions within the past year. It makes no sense that you'd be comparing to submissions from 10 years ago (unless that's what you want, but then you should have to explicitly specify that). Next, it should allow you to refine the comparison by criteria like laptop/desktop/workstation/server, iGPU/dGPU, Intel/AMD, Windows/Linux, SATA/NVMe, overclocked/stock, etc.
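As a rough Python sketch of that windowing and filtering idea (the record fields and values here are invented for illustration; real UBM data would look different):

```python
from datetime import datetime, timedelta

# Hypothetical submission records; field names are made up for this example.
submissions = [
    {"score": 95,  "date": datetime(2015, 6, 1), "form": "desktop", "gpu": "dGPU"},
    {"score": 140, "date": datetime(2024, 2, 1), "form": "desktop", "gpu": "dGPU"},
    {"score": 70,  "date": datetime(2024, 3, 1), "form": "laptop",  "gpu": "iGPU"},
    {"score": 155, "date": datetime(2024, 5, 1), "form": "desktop", "gpu": "dGPU"},
]

def comparable(subs, newer_than, **criteria):
    """Keep only submissions inside the time window that match every criterion."""
    return [
        s for s in subs
        if s["date"] >= newer_than
        and all(s[k] == v for k, v in criteria.items())
    ]

# Compare only against desktop dGPU submissions from the past year
# (relative to a fixed "now" so the example is reproducible).
now = datetime(2024, 6, 1)
window = comparable(submissions, now - timedelta(days=365),
                    form="desktop", gpu="dGPU")

scores = sorted(s["score"] for s in window)
print(scores)  # the 10-year-old desktop and the laptop are both excluded
```

The same `comparable` call could take any of the other criteria mentioned above (Intel/AMD, SATA/NVMe, overclocked/stock) as extra keyword filters.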
 
  • Like
Reactions: punkncat
My main problem with it is how the results are presented.

A system can be both "Below expectations" and "Good - 173%"...at the same time.

Confusing to the uninformed.
Yes, confusing. Why uninformed, though?

What seems confusing to me is their mixing of percentiles with percentages as a measure of relative performance. I wouldn't do that. I'd show percentiles, but then say "performs 1.73x as fast as the mean submission".
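The two measures really are different things, which a tiny sketch makes obvious (toy scores, not real UBM data):

```python
from statistics import mean

# Hypothetical pool of composite scores from other submissions.
pool = [40, 55, 60, 70, 80, 90, 100, 120, 150, 400]  # note the outlier

my_score = 173

# Percentile: what fraction of submissions does this score beat?
percentile = 100 * sum(s < my_score for s in pool) / len(pool)

# Ratio to the mean: "performs 1.X times as fast as the mean submission".
ratio = my_score / mean(pool)

print(f"{percentile:.0f}th percentile")  # rank among submissions
print(f"{ratio:.2f}x the mean score")    # relative performance
```

Here the score sits at the 90th percentile but is only about 1.48x the mean, because one outlier drags the mean up. Reporting both, clearly labeled, avoids the "Below expectations" vs "Good - 173%" confusion.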

Also, some transparency should be provided into how they determine "good". Perhaps they mean it won't be a likely bottleneck with games? If so, they need to say which games they're using as a standard, because this will change over time.
 
I think there are two separate questions, that are each important:
  1. How does this system compare to others with similar hardware components?
  2. What's the relative performance of this system, within the current PC landscape?

The first is a valuable troubleshooting tool: for instance, letting you know if your memory performance is far worse than others' with the same CPU. This could surface problems with memory configuration, or maybe someone simply pairing very slow memory with a fast CPU. Another example I can imagine is a CPU that's getting heat-soaked by the GPU or has a poorly installed heatsink.

The second should be split into several categories. First, it should window the comparison to just those submissions within the past year. It makes no sense that you'd be comparing to submissions from 10 years ago (unless that's what you want, but then you should have to explicitly specify that). Next, it should allow you to refine the comparison by criteria like laptop/desktop/workstation/server, iGPU/dGPU, Intel/AMD, Windows/Linux, SATA/NVMe, overclocked/stock, etc.

I agree. In my understanding of the way their benchmark works, the second scenario is the actual composite, while "other systems with the same hardware" is a secondary consideration.
 
  • Like
Reactions: bit_user
I ran a benchmark there 5 months ago, with an AMD 5900X and a 7900XTX system.
My scores were 298% gaming, 108% desktop and 328% workstation.
All 3 results were rated "UFO", the highest possible rating they provide.
Link to results: https://www.userbenchmark.com/UserRun/63733051

This is what they had to say about the CPU. It is copied directly from their benchmark site, and it has nothing to do with benchmarking and everything to do with trying to steer sales towards Intel.

"Despite this clear performance deficiency, AMD supported 3000 series sales with an aggressive and successful marketing campaign to easily outsell Intel over the last 12 months. Given the real performance uplift observed in the 5000 series, and the absence of any meaningful marketing from Intel, we expect CPU sales to shift even further in AMD’s favour. Gamers that do not wish to pay “marketing fees” should investigate Intel’s $175 USD 11400F, the $660 USD savings would be far better spent on a higher tier GPU."

That site is the biggest <Mod Edit> junk testing site I have ever found on the internet, and I hope this latest "change" is the end of them.
I did not see that biased quote in your benchmark results. I know those quotes exist if you search for particular CPU information, but your CPU benchmark results seem fine and fair compared to mine, which I linked in post #26 of this thread. By fair I mean consistent with the relative difference between a ~4.65GHz 5900X and a 13900KF running 5.8GHz on all P-cores, as shown by most other benchmarks. The result seemed fine; it seems you are offended by the biased descriptions found elsewhere on the site.

Is there a better alternative benchmark for representing real-world performance for the casual PC user? PCMark 10? PassMark? Those are probably the two biggest competitors, and both have strengths and weaknesses. UBM seems to have the most relevant information for me.

The stress-test type benchmarks are better for checking your performance under specific stress-test style loads, and of course particular game benchmarks will be better for particular games, but neither group is a good representative of the general casual use UBM is targeting. Particularly the stress-test type benchmarks: it is disingenuous to represent a rare load as a common one.

If there were a much better casual-use performance benchmark, there would be less reason to use UBM, and to put up with your CPU getting insulted if you go to the wrong page. But hey, welcome to my world when I go to the comments section on most tech websites 😛 At least they don't have memes yet.

As an aside, I like to use Intel MLC for a real quick latency check when I'm messing with timings: https://www.intel.com/content/www/us/en/download/736633/intel-memory-latency-checker-intel-mlc.html
It likely has issues with Ryzen, but it's free so that's good for some people at least.
 
I tried to run it just now.
It updated automatically.
It now says servers are busy.
Sign in (PRO) or cancel.

Old score for me.
So rank within the same hardware makes a big difference.
If you are far left, something is wrong.
Far right, a very tweaked/overclocked system.
My 5600X scores close to 5800X or 12500/12600 scores.
No overclocking, just tweaked and cooled well.
https://www.userbenchmark.com/UserRun/63773350
I had the same deal until I downloaded the new version from the site. Then I had to waste time shooting some junk before it ran.
 
I did not see that biased quote in your benchmark results. I know those quotes exist if you search for particular CPU information, but your CPU benchmark results seem fine and fair compared to mine, which I linked in post #26 of this thread. By fair I mean consistent with the relative difference between a ~4.65GHz 5900X and a 13900KF running 5.8GHz on all P-cores, as shown by most other benchmarks. The result seemed fine; it seems you are offended by the biased descriptions found elsewhere on the site.

Is there a better alternative benchmark for representing real-world performance for the casual PC user? PCMark 10? PassMark? Those are probably the two biggest competitors, and both have strengths and weaknesses. UBM seems to have the most relevant information for me.

The stress-test type benchmarks are better for checking your performance under specific stress-test style loads, and of course particular game benchmarks will be better for particular games, but neither group is a good representative of the general casual use UBM is targeting. Particularly the stress-test type benchmarks: it is disingenuous to represent a rare load as a common one.

If there were a much better casual-use performance benchmark, there would be less reason to use UBM, and to put up with your CPU getting insulted if you go to the wrong page. But hey, welcome to my world when I go to the comments section on most tech websites 😛 At least they don't have memes yet.

As an aside, I like to use Intel MLC for a real quick latency check when I'm messing with timings: https://www.intel.com/content/www/us/en/download/736633/intel-memory-latency-checker-intel-mlc.html
It likely has issues with Ryzen, but it's free so that's good for some people at least.

The point I was trying to make was that it discussed the Ryzen 3000 and more or less attributed AMD's success to their aggressive marketing campaigns, not the actual processor design and performance.

It then continued to imply the same will be true for the Ryzen 5000, as Intel is barely marketing their CPUs.

It then concludes that gamers who do not wish to pay "marketing fees" should get the 11400F over the 5900X. It's basically recommending purchasing an 11400F over a 5900X for gaming and putting the difference ($660 USD?) towards a better GPU, thereby totally ignoring the CPU bottleneck you would experience when pairing a higher-end GPU with a lower-end CPU.
It's a bad recommendation that has its foundation in cost, not performance.

At the same time the 5900X that was benchmarked is described as:

"With an outstanding single core score, this CPU is the cat's whiskers: It demolishes everyday tasks such as web browsing, office apps and audio/video playback. Additionally this processor can handle intensive workstation, and even full-fledged server workloads. Finally, with a gaming score of 112%, this CPU's suitability for 3D gaming is outstanding."
 
  • Like
Reactions: bit_user
Something I like about UBM is that it is geared towards the mainstream, less informed, less analytical user, and gives results more representative of someone who uses their PC for a mix of web, office, and gaming than the vast majority of other benchmarks out there. Sure, if you need to test specific performance on something like Cinebench, the corresponding Cinebench benchmarks will be more accurate for that, but they don't translate well to general plebeian uses where 4 fast threads will usually outperform 128 slow ones.
Every major review site I read has a mix of tests covering web/application performance, gaming, and rendering/compute. If you never click on the web, office, or application performance page(s), that's on you.

BTW, web browsers have been heavily threaded for many years. They benefit from multiple cores, even if not as much as Cinebench.
 
The point I was trying to make was that it discussed the Ryzen 3000 and more or less attributed AMD's success to their aggressive marketing campaigns, not the actual processor design and performance.

It then continued to imply the same will be true for the Ryzen 5000, as Intel is barely marketing their CPUs.

It then concludes that gamers who do not wish to pay "marketing fees" should get the 11400F over the 5900X. It's basically recommending purchasing an 11400F over a 5900X for gaming and putting the difference ($660 USD?) towards a better GPU, thereby totally ignoring the CPU bottleneck you would experience when pairing a higher-end GPU with a lower-end CPU.
It's a bad recommendation that has its foundation in cost, not performance.

At the same time the 5900X that was benchmarked is described as:

"With an outstanding single core score, this CPU is the cat's whiskers: It demolishes everyday tasks such as web browsing, office apps and audio/video playback. Additionally this processor can handle intensive workstation, and even full-fledged server workloads. Finally, with a gaming score of 112%, this CPU's suitability for 3D gaming is outstanding."
The first part you quoted was likely from a general processor summary, not something you would see from just running the benchmark. I agree that those are biased and no more credible in a general sense than the implication that an i9 will consume 330W most of the time for most users. Neither statement should be taken seriously, but that also shouldn't disqualify everything else on sites that make either of those statements or implications.

As an example, the statement you quoted from your benchmark seems largely accurate. And on the gaming score, would you estimate that your CPU is 112/129 as fast as an average 13900K? The 13900K averages 129 in gaming, while 3DMark's physics score would predict it as much less. I do admit that UBM isn't good at predicting the gaming performance of 3D V-Cache chips, but what general bench is?

But do call them out on biased statements in the general processor descriptions. It is a bit of a stain on the site, but maybe they are doing it for the free bad publicity. Also, I just tested with Geekbench 6 and my CPU was 8.5% faster than the average for the same chip, vs 7.7% faster on UBM.
Every major review site I read has a mix of tests covering web/application performance, gaming, and rendering/compute. If you never click on the web, office, or application performance page(s), that's on you.

BTW, web browsers have been heavily threaded for many years. They benefit from multiple cores, even if not as much as Cinebench.
I'm just talking about easy system benchmarks for typical home users testing their own equipment. UserBenchmark isn't something a reviewer would use unless they were very lazy.

And from a usability standpoint, web browsers and office apps are light enough that they need faster internet, memory, and drive access more than many cores. Will 192 threads at 3.7GHz (9654) be faster than 8 threads at 4.7GHz (14100F) in light use? It sure as dirt will bench a lot higher in most benchmarks.
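A crude toy model makes the point (illustrative numbers only; this ignores boost behavior, memory, cache, and everything else that matters in real systems):

```python
def toy_throughput(threads, clock_ghz, workload_parallelism):
    """Crude model: throughput scales with usable threads times clock speed."""
    usable = min(threads, workload_parallelism)
    return usable * clock_ghz

# A light desktop workload that only ever keeps ~4 threads busy.
light = 4
many_slow_light = toy_throughput(192, 3.7, light)  # 9654-style: many slow threads
few_fast_light = toy_throughput(8, 4.7, light)     # 14100F-style: few fast threads

# A heavily parallel render that scales to hundreds of threads.
heavy = 256
many_slow_heavy = toy_throughput(192, 3.7, heavy)
few_fast_heavy = toy_throughput(8, 4.7, heavy)

print(light, many_slow_light, few_fast_light)   # fast threads win for light use
print(heavy, many_slow_heavy, few_fast_heavy)   # thread count wins for heavy use
```

Under the light workload, the few fast threads come out ahead, while the heavily parallel load flips the result completely, which is why multi-thread-heavy benchmarks mislead casual users.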
 
  • Like
Reactions: m7dm7d
The first part you quoted was likely from a general processor summary, not something you would see from just running the benchmark. I agree that those are biased and no more credible in a general sense than the implication that an i9 will consume 330W most of the time for most users. Neither statement should be taken seriously, but that also shouldn't disqualify everything else on sites that make either of those statements or implications.
It's actually worse than claiming a Raptor Lake i9 will use 330 W, because there are at least some situations where the latter is factually accurate.

The other thing about your statement is that you seem to be contrasting what the official benchmark site says with what random forum posters say, which is also a false equivalence.

And from a usability standpoint, web browsers and office apps are light enough that they need faster internet, memory, and drive access more than many cores.
You'd be surprised how much ads can bog down a web page. Not only that, but anti-malware makes it even worse. I have a 16-core/24-thread Alder Lake CPU in my work laptop, and Chrome bogs down worse on that machine than Firefox does on my Sandy Bridge box with anti-spyware. Same web sites, same internet connection!

Will 192 threads at 3.7GHz (9654) be faster than 8 threads at 4.7GHz (14100f) in light use? It sure as dirt will bench a lot higher in most benchmarks.
What the heck are you even talking about?
 
  • Like
Reactions: Order 66