Can you support this accusation? I don't know of any Mac review websites owned, run by, or affiliated with John Poole or his company, Primate Labs.
He ran a site called "Geek Patrol" for a number of years that was focused on Macs, though I don't think it's been online for a while. You can still find it at archive.org, though the newer snapshots appear to be from squatter sites, presumably after he either sold the domain or let it expire.
That's not to say they are necessarily giving Apple hardware an edge on purpose, but with the developer having an Apple-centric background, there's certainly room for some bias to creep in.
Geekbench contains no synthetic benchmarks; it's a collection of real-world, compute-intensive tasks. Many of its tests are packaged-up open-source programs, such as compression/decompression routines and a C compiler.
https://www.geekbench.com/doc/geekbench5-cpu-workloads.pdf
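For a sense of what a "packaged real-world workload" looks like, here's a toy sketch in Python using zlib. It's purely illustrative, not Geekbench's actual code or scoring method:

```python
import os
import time
import zlib

# A toy workload in the spirit of a compression benchmark: compress and
# decompress a buffer, and report round-trip throughput. Random bytes are
# nearly incompressible, so a run of repeated bytes is appended to give
# the compressor something to do.
data = os.urandom(4 * 1024 * 1024) + b"A" * (4 * 1024 * 1024)

start = time.perf_counter()
compressed = zlib.compress(data, level=6)
assert zlib.decompress(compressed) == data  # verify the round trip
elapsed = time.perf_counter() - start

mb = len(data) / (1024 * 1024)
print(f"Processed {mb:.0f} MB in {elapsed:.2f}s "
      f"({mb / elapsed:.1f} MB/s round-trip)")
```

A real benchmark suite wraps dozens of tasks like this and boils the timings down into scores, which is where the methodology questions below come in.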
Right off the bat, I can see they compile the benchmark with Apple's native Xcode toolchain for macOS and iOS, while for the other platforms they use the Clang compiler... which was also originally developed by Apple, even if it has been maintained by the open-source community for a while now. So Mac hardware could potentially be getting an edge in the results, given Apple's close ties to the compilers used; it's not as if they're using the Intel compiler for Intel systems in the same way. And since the majority of Windows software is not compiled with Clang, the results won't necessarily reflect the compiler optimizations found in most real software. That's not necessarily a fault of the benchmark, as they are probably trying to keep the compiler as consistent as possible across devices, but it could still work in Apple's favor.
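To make the compiler point concrete, here's a minimal sketch (assuming clang is on your PATH) showing that the exact same source code produces different timings depending on how it's compiled. The toy loop and the script are my own invention for illustration, not anything from Geekbench:

```python
import os
import subprocess
import tempfile
import textwrap
import time

# A trivial C workload. The volatile accumulator keeps the compiler from
# deleting the loop outright, so the remaining loop overhead still varies
# with the optimization level.
C_SOURCE = textwrap.dedent("""
    #include <stdio.h>
    int main(void) {
        volatile double acc = 0.0;
        for (long i = 0; i < 200000000L; i++)
            acc += (double)i * 0.5;
        printf("%f\\n", acc);
        return 0;
    }
""")

def time_binary(opt_flag: str) -> float:
    """Compile the toy workload with the given flag and time one run."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "bench.c")
        exe = os.path.join(tmp, "bench")
        with open(src, "w") as f:
            f.write(C_SOURCE)
        subprocess.run(["clang", opt_flag, "-o", exe, src], check=True)
        start = time.perf_counter()
        subprocess.run([exe], check=True, capture_output=True)
        return time.perf_counter() - start

for flag in ("-O0", "-O2"):
    print(f"{flag}: {time_binary(flag):.2f}s")
```

If flag choice alone can swing a result this much on one machine, the choice of toolchain across platforms is clearly not a neutral variable either.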
Then I see they weight the test categories arbitrarily: cryptography counts for 5% of the total score, integer tests for 65%, and floating-point tests for 30%. How they arrived at those weights is anyone's guess, and it sounds like every test within a category gets equal weight. Needless to say, any benchmark that tries to represent overall CPU performance with a single, one-size-fits-all score is not going to capture the different ways people use computers. Who cares if a given processor is faster at tasks that push it into the lead over another, if those aren't the tasks you'll actually be using it for?
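Here's a quick sketch of why the weighting matters, using those documented percentages but with two made-up CPUs and invented subsection scores (Geekbench's exact aggregation may differ in detail; this only shows how fixed weights decide the winner):

```python
# Fixed weights from the Geekbench 5 documentation.
WEIGHTS = {"crypto": 0.05, "integer": 0.65, "float": 0.30}

# Hypothetical subsection scores for two imaginary CPUs.
cpu_a = {"crypto": 800, "integer": 1500, "float": 900}
cpu_b = {"crypto": 1600, "integer": 1100, "float": 1600}

def overall(scores: dict) -> float:
    """Weighted mean of the subsection scores under the fixed weights."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(f"CPU A: {overall(cpu_a):.0f}")  # 0.05*800 + 0.65*1500 + 0.30*900  = 1285
print(f"CPU B: {overall(cpu_b):.0f}")  # 0.05*1600 + 0.65*1100 + 0.30*1600 = 1275
```

Under this weighting, CPU A "wins" overall despite being half as fast at cryptography and far slower at floating point. If floating-point work is what you actually do, the single headline number points you at the wrong chip.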
And I would describe these tests as "semi-synthetic". They do use some open-source routines to perform real-world tasks, but not necessarily in ways that are representative of how actual software handles the same routines. For that, one would need to benchmark actual, complete applications, rather than run a quick test that spits out a single number derived from an arbitrary collection of tests.
And of course, any existing x86 software for Macs that hasn't been optimized for the M1 processors will run under emulation, which can in many cases mean significant slowdowns, but that isn't something these benchmarks capture.
The benchmark itself might be okay for rough comparisons of similar hardware architectures running similar operating systems, but when everything changes at once, there are far too many variables to draw meaningful conclusions. The new Apple chips seem fine for what they are, but one shouldn't put too much weight on these numbers.