And that should just reinforce prior suggestions that synthetic benchmarks like Passmark shouldn't be relied upon to accurately represent real-world performance: their scores can benefit more from certain hardware architectures than from others, in ways that won't always align with other software.
And again, the scores don't make much sense given the small difference in clock rates compared to the architecturally similar 5800X, unless a lot of poorly configured 5800X3D systems are among those submissions. Looking at the distribution of individual 5800X3D results, the overall CPU Mark scores split into two groups: a relatively tight cluster of higher-scoring results, clearly the properly configured systems, and a separate, much slower cluster. On average, the properly configured systems score over 25% higher than the misconfigured ones dragging down the results. And while Passmark might want you to believe that 21 submissions from random online users is enough to provide a "low" margin of error, that's complete nonsense. We don't know who tested these systems. For all we know, someone could have tested a number of different RAM configurations to see how they affect performance, an early OEM prebuilt could be shipping with slow single-channel memory, or someone could even be deliberately submitting misconfigured results to manipulate the numbers.
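To make the point concrete, here's a minimal sketch using entirely made-up scores (not real Passmark data) of how a bimodal set of submissions drags down the overall average and why 21 samples is a shaky basis for a "low" margin of error. The two groups are assumed to be roughly 25% apart, as described above:

```python
# Hypothetical illustration only: 21 fabricated CPU Mark scores split
# between a tight "properly configured" group and a slower
# "misconfigured" group, roughly 25% apart.
import math
import statistics

good = [28000, 28200, 28100, 27900, 28300, 28150, 28050,
        27950, 28250, 28120, 28080, 27980, 28220, 28180]
bad = [22000, 22500, 21800, 22300, 22100, 21900, 22400]

combined = good + bad
mean_all = statistics.mean(combined)    # what a naive average reports
mean_good = statistics.mean(good)       # what well-configured systems do

# Rough 95% margin of error via the normal approximation,
# ~1.96 * s / sqrt(n); a t-based interval at n = 21 would be wider.
s = statistics.stdev(combined)
moe = 1.96 * s / math.sqrt(len(combined))

print(f"mean of all {len(combined)} submissions: {mean_all:.0f}")
print(f"mean of well-configured systems only:  {mean_good:.0f}")
print(f"95% margin of error: +/-{moe:.0f} "
      f"({100 * moe / mean_all:.1f}% of the mean)")
```

With numbers like these, the misconfigured cluster pulls the overall mean well below what a properly configured system actually scores, and the bimodal spread inflates the standard deviation enough that the interval is anything but tight.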
As for the possibility that these mixed results are the norm, that doesn't appear to be supported by another synthetic benchmarking site which, despite being very obviously anti-AMD right down to its CPU descriptions, shows the expected results relative to the 5800X. According to its database of 304 user test runs, the 5800X's single-threaded performance is only 6% faster than the 5800X3D's, about what you'd expect for an application that doesn't benefit from the additional cache. Passmark is likely the more reputable site, even if it gets less traffic, but that's still more evidence that the current Passmark results are not representative of real-world performance, and that they're at the very least being thrown off by a number of questionable submissions within a small sample.