I forget the exact figure for the Trinity A10-5800K, but once memory frequency is set higher than 1600 or 1866 MT/s, the performance gains in gaming (GPU workloads) diminish. That's what I was referring to.
I also had a thought. Hypothetically, suppose a 64-bit batch of data becomes ready to transfer 2,133 million times a second. A 64-bit bus at 2,133 MT/s would (theoretically) move each batch smoothly as it arrives. A 128-bit bus at 1,066 MT/s, which has the same theoretical bandwidth, would move the 1st batch of 64 bits fine, but when the 2nd batch arrives, the bus wouldn't be ready to transfer yet because of its halved transfer rate. Only when the 3rd batch arrives (bringing the pending data up to 128 bits) would it transfer everything at once. So in this hypothetical, the wider bus doesn't move the data as smoothly as the first system, and some of it might arrive later at wherever it needs to be (to be processed, maybe).
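To play that hypothetical out, here's a tiny toy simulation of it in Python. To be clear, this is just the simplified model from the paragraph above, not how real memory controllers behave (they buffer, pipeline, and reorder requests), and the `completion_times` helper is made up for illustration. I used 1,066.5 MT/s for the wide bus so the peak bandwidths match exactly (64 × 2,133 = 128 × 1,066.5):

```python
# Toy model: 64-bit batches become ready at a fixed rate; each bus moves
# ready data only on its own transfer ticks, up to its width per tick.
BATCH_BITS = 64
ARRIVAL_RATE = 2133e6            # a 64-bit batch becomes ready 2,133M times/s

def completion_times(width_bits, transfers_per_sec, batches=8):
    """Return the time at which each 64-bit batch finishes crossing the bus."""
    period = 1 / transfers_per_sec
    ready = [i / ARRIVAL_RATE for i in range(batches)]  # when batch i is ready
    done, nxt, tick = [], 0, 1
    while nxt < batches:
        now = tick * period
        capacity = width_bits
        # move as many ready batches as fit in this tick's width
        while nxt < batches and ready[nxt] <= now and capacity >= BATCH_BITS:
            done.append(now)
            capacity -= BATCH_BITS
            nxt += 1
        tick += 1
    return done

narrow = completion_times(64, 2133e6)        # 64-bit bus @ 2,133 MT/s
wide = completion_times(128, 2133e6 / 2)     # 128-bit bus @ 1,066.5 MT/s

for i, (n, w) in enumerate(zip(narrow, wide)):
    print(f"batch {i}: narrow {n*1e9:6.3f} ns  wide {w*1e9:6.3f} ns")
```

Running it, the wide bus delivers batches in pairs: every other batch arrives about 0.47 ns (one arrival period) later than it does on the narrow bus, while the rest arrive at the same time, which matches the intuition above.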
This makes me think that faster (transfer rate) but narrower (bus width) is more flexible across different workloads than slower but wider.
I'm not sure memory actually works the way I described. If it does, I wonder whether this could manifest in end performance at all, and if so, whether it would be noticeable, how often it would take effect, and what kind of software or situation it would take to make it show.