that it isn't going to get that same result if it's retested six months to a year later, after being hammered for a while.
The problem with this is that these claims are not quantifiable, or at least no one has tested this AFAIK. I understand that, much like relying on a single review per lineup to tier the whole lineup, there might be problems arising from under-rated component choices, but I just don't see a way to incorporate that into the methodology.
If there were actual inconsistencies, I'd know about it. I think the inconsistencies are in the reviewer's test equipment and methodology.
I don't know what the target for ripple on it is, but we have 53mV ripple in the 750W SKU review, then 27mV on the 650W one, and then back up to 42mV on the 550W one; looks pretty inconsistent to me. The last two also have an identical production-date portion of the LOT code, so they're supposedly from the same batch?
I mean, I can make a complete garbage PSU that will blow up in under a year, but if the ripple is low it's tier A?
The other way around: ripple obviously isn't the only thing that matters in a PSU, but if an otherwise good PSU still has higher ripple than most units supposedly on its level, then sorry, it doesn't belong there. Now, we don't have hard data on how important ripple actually is, but as far as I can tell the general consensus is that somewhere below 50mV it ceases to matter much, so that was our cutoff point for high-end stuff. But I'm open to feedback from you, obviously; we haven't really asked anyone about this yet, it's just that IIRC both Aris and Jeremy have their preferences somewhere around this number too.
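Just to illustrate the kind of cutoff I mean, here's a rough sketch, not our actual tooling; the ~50mV figure is the only number taken from this discussion, the names and structure are made up:

```python
# Hypothetical sketch of a ripple cutoff for tiering. Only the ~50mV
# high-end figure comes from this thread; everything else is illustrative.
HIGH_END_RIPPLE_MV = 50  # roughly where ripple stops mattering much

def ripple_ok_for_high_end(worst_case_ripple_mv: float) -> bool:
    """One criterion among many: is worst-case ripple below the high-end cutoff?"""
    return worst_case_ripple_mv < HIGH_END_RIPPLE_MV

# The three reviews mentioned earlier:
for sku, ripple_mv in [("750W", 53), ("650W", 27), ("550W", 42)]:
    verdict = "ok" if ripple_ok_for_high_end(ripple_mv) else "too high"
    print(f"{sku}: {ripple_mv}mV ripple, {verdict} for high-end")
```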
Now, back to the low-ripple garbage unit: of course, if there were known problems with units blowing up or failing frequently, we'd detier it as soon as we got such feedback. But I'll reiterate once more: it's not that easy to tell just from a component breakdown whether a particular design is good or bad. There needs to be testing, and reviews plus user feedback are the closest thing we have to that.
But sure, I think it would be reasonable to incorporate your feedback at least, GPX; we'll discuss that internally.
Personally, I think it should be 3 levels.
So what criteria would you set for a unit to be in level 1? We need something quantifiable, not just "it's a Seasonic Prime, so it's good". PSUs aren't much different from any other hardware when it comes to reviews: even if hard data says something should be good, there may still be QC and reliability issues. Look at Enermax AIOs, for example: they performed very well in GN reviews, and yet what a disaster they proved to be reliability-wise. If we get that kind of bad feedback on PSUs that seem to perform well in reviews, we'll reflect it in the list, no question. We already have: look at the Seasonic Focus, EVGA G3, and a whole bunch of other units that ended up detiered to lower tiers or put in a low-priority subtier due to issues of various magnitude.
Basically, what you're proposing here we already have: level 1 is the tier A gold units, which are all well reviewed, perform well, and have our confidence. Level 2 is everything else in tiers A, B, and C, plus some select units from tier D; there are various grades of budget units, so we don't just put them all in one big pile. Level 3 is pretty much everything else from tiers D and E.
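As a rough sketch of that mapping (the data structure is mine, only the level-to-tier correspondence comes from what I just described):

```python
# Illustrative only: the level-to-tier mapping described above,
# written out as a plain data structure.
LEVELS = {
    1: "Tier A gold units: well reviewed, good performance, high confidence",
    2: "Everything else in tiers A, B, C, plus select tier D units",
    3: "Pretty much everything else from tiers D and E",
}

for level, contents in LEVELS.items():
    print(f"Level {level}: {contents}")
```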
It's as if I do a PSU review for every product, except that actually writing the review would be a conflict of interest.
Exactly... like, it would still be foolish to ignore your data, but we can't just work with these random claims. Or rather we can, but we need more of them to paint the whole picture, and, well, it needs to be public, which I guess isn't possible anyway.
Edit: Or at the very least we could probably use that data under NDA, provided you / Corsair would be willing to share something in the first place. That way we could confidently say 'it's bad' based on hard data without sharing the data itself.