Psyintz :
Sure will! What's the best method for doing so? Do you prefer the spec log from a specific application, or? I apologize. I'm quite new here.
For starters: The benchmark program must be the ONLY thing running on the computer. Leaving _anything_ else running is likely to impact benchmark performance.
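By way of illustration only (a sketch, assuming a Linux box with Python and the third-party psutil module installed), a quick pre-flight check along these lines will show you what's still chewing CPU or doing disk I/O before you hit go:

```python
# Rough pre-flight check before a benchmark run (a sketch; assumes Linux
# and the third-party "psutil" package). It samples per-process CPU and
# disk I/O over one second and lists anything that isn't idle, so it can
# be shut down before the benchmark starts.
import time
import psutil

def busy_processes(sample_seconds=1.0):
    before = {}
    for p in psutil.process_iter(['pid', 'name']):
        try:
            p.cpu_percent(None)              # prime the CPU counter
            before[p.pid] = p.io_counters()  # cumulative I/O so far
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

    time.sleep(sample_seconds)

    busy = []
    for p in psutil.process_iter(['pid', 'name']):
        try:
            cpu = p.cpu_percent(None)
            io = p.io_counters()
            prev = before.get(p.pid)
            bytes_moved = 0
            if prev is not None:
                bytes_moved = (io.read_bytes - prev.read_bytes) + \
                              (io.write_bytes - prev.write_bytes)
            if cpu > 1.0 or bytes_moved > 0:
                busy.append((p.info['pid'], p.info['name'], cpu, bytes_moved))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return busy

if __name__ == "__main__":
    for pid, name, cpu, moved in busy_processes():
        print(f"{pid:>7}  {name or '?':<25} {cpu:5.1f}% CPU  {moved} bytes of I/O")
```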
Secondly: What is the actual state of the drive itself?
Most published benchmarks are for "fresh" (or freshly erased) SSDs. Some are run at "50% full" (where the definition of "full" varies). If you have a nearly full drive then performance will often falter badly.
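If it helps, recording the fill state alongside each set of results takes only a couple of lines (a sketch, assuming Python on the box under test and that you pass the mount point of the drive being benchmarked):

```python
# Record how full the filesystem under test is, so the number can be kept
# alongside the benchmark results (a sketch; pass the mount point of the
# drive being benchmarked).
import shutil
import sys

def fill_percent(mount_point="/"):
    usage = shutil.disk_usage(mount_point)
    return 100.0 * usage.used / usage.total

if __name__ == "__main__":
    mount = sys.argv[1] if len(sys.argv) > 1 else "/"
    print(f"{mount}: {fill_percent(mount):.1f}% full")
```

Bear in mind that filesystem usage and what the SSD controller actually has free (trimmed blocks, overprovisioning) are not the same thing, which is precisely why "50% full" means different things to different reviewers.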
Published testing methodology is important, and it must be repeatable.
Thirdly: do you have "fresh" drive data to compare against? Every time you change the hardware configuration you need to re-benchmark (before/after comparisons).
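For the before/after comparisons it pays to keep machine-readable output. As one possible approach (not gospel), fio can write JSON with --output-format=json, and a few lines of Python will diff the headline numbers; the field names below assume fio's JSON layout:

```python
# Compare the headline read/write numbers from two fio runs saved with
# --output-format=json (a sketch; field names assume fio's JSON layout,
# e.g. jobs[0]["read"]["iops"]).
import json
import sys

def headline(path):
    with open(path) as f:
        job = json.load(f)["jobs"][0]
    return {
        "read_iops":  job["read"]["iops"],
        "read_bw":    job["read"]["bw"],    # KiB/s in fio's output
        "write_iops": job["write"]["iops"],
        "write_bw":   job["write"]["bw"],
    }

if __name__ == "__main__":
    before, after = headline(sys.argv[1]), headline(sys.argv[2])
    for key in before:
        b, a = before[key], after[key]
        change = 100.0 * (a - b) / b if b else float("nan")
        print(f"{key:<11} before={b:>12.1f}  after={a:>12.1f}  ({change:+.1f}%)")
```

Run the identical fio job before and after the hardware change and feed both result files to the script (the script and file names here are hypothetical, e.g. `python compare_runs.py before.json after.json`).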
This is important, as a lot of SATA controllers are pretty rotten and become major bottlenecks for an SSD.
It's good that Tom's are doing long-term tests, but you do need to take into account that they erase the drives for each test cycle to ensure they're in a known state (this is the only way to run long-term tests). What that means is that "your mileage may vary", especially if your drive is low on available blocks.
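If you want to reproduce that "known state" yourself, the usual Linux route is to discard (TRIM) the entire device, which of course destroys everything on it. Treat the sketch below as illustrative only - it assumes root, a Linux box, util-linux's blkdiscard, and that the device name is whatever your SSD actually is:

```python
# DESTRUCTIVE: discards every block on the named device, i.e. wipes it, to
# put the SSD back into a known (freshly erased) state before a test cycle.
# A sketch only; assumes Linux, root privileges, and util-linux's blkdiscard.
import subprocess
import sys

def reset_ssd(device):
    answer = input(f"Really discard ALL data on {device}? Type the device name to confirm: ")
    if answer.strip() != device:
        sys.exit("Aborted.")
    # blkdiscard issues a discard/TRIM over the whole device.
    subprocess.run(["blkdiscard", device], check=True)

if __name__ == "__main__":
    reset_ssd(sys.argv[1])   # e.g. /dev/sdX (hypothetical device name)
```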
"real world" tests are often very different to lab ones. It's hard to replicate a real world environment and real world loading in labs.
That means: For all intents and purposes, a benchmark is merely a nod in the direction of what you'll see in service - and running benchmarks on a "real world" system often gives results at odds with the actual performance seen for any given application (sometimes benchmarks say it should be great and it's not, or vice versa).
_Everything_ is about compromise.
As a case in point:
I admin academic scientific cluster systems with thousands of TB of shared storage and hundreds of CPUs. As loads get cranked up, they often fail in ways that the software authors never even dreamed of, let alone devised tests for. This makes deploying new technology a white-knuckle ride long after the equipment has bedded in - and of course people deploying new software seldom bother checking with us beforehand, so the first we know of a new failure mode is alarms and/or angry complaints that XYZ doesn't work.
A lot of the time we find that what they're trying to do works on a single server/drive system but scales spectacularly badly to larger or multi-user systems - so we have to teach them how to optimize for what's effectively cloud computing and storage. Having gone through that optimization process, you may find that performance on the original testbed system is better - but generally it's significantly worse, because the software no longer monopolises all available resources on the smaller system.
On a single-user machine running batch processing, no one cares if interactive performance suffers, but if 30 other people are using the system and interactive performance suffers (or other processes stop completely) it's a different matter. Similar things apply to disks - both spinning rust and SSDs.
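On the shared boxes, one of the first things we teach people is to stop their batch jobs starving everyone else's I/O. A rough sketch of the idea (assuming Linux with ionice and nice available; the wrapped command is whatever your batch job happens to be):

```python
# Launch a batch job at "idle" I/O priority and lowest CPU priority so that
# interactive users on a shared box are not starved (a sketch; assumes Linux
# with ionice available, and the command is whatever your batch job actually is).
import subprocess
import sys

def run_politely(command):
    # ionice -c 3: idle I/O scheduling class; nice -n 19: lowest CPU priority.
    return subprocess.run(["ionice", "-c", "3", "nice", "-n", "19"] + command)

if __name__ == "__main__":
    run_politely(sys.argv[1:])   # e.g. run_politely(["my_batch_job", "--input", "data/"])
```

Whether the idle I/O class actually does anything depends on the I/O scheduler in use, so treat it as a guideline rather than a guarantee.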