Now you're the one who has it backwards, because power is even more indirectly coupled to clock speed and voltage. If you set a power limit of half, but the workload has low shader utilization, the clocks (and therefore voltage) might still get ramped up, because the power management controller sees it has enough headroom to clock up the non-idle units without exceeding the power limit. That will still cause accelerated wear on those blocks.
If the issue you want to control is wearout, then the best solution definitely involves lowering the peak frequencies. I'll bet it would only take a modest clipping of peak clocks. To achieve the same effect by limiting power, you might have to reduce it much more. You might additionally choose to limit power, depending on how much of a factor thermals are thought to be.
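To make that concrete, here's a rough sketch of the kind of thing I mean, using NVML through the pynvml (nvidia-ml-py) bindings. It assumes an NVIDIA card, root privileges, and a driver recent enough to support locked clocks, and the 10% trim is purely illustrative, not a recommendation:

    # Sketch: clip peak SM clocks modestly instead of (or in addition to) cutting power.
    # Assumes pynvml (pip install nvidia-ml-py), root, and locked-clock support.
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Find the supported graphics clock range at the top memory clock.
    mem_mhz = pynvml.nvmlDeviceGetMaxClockInfo(gpu, pynvml.NVML_CLOCK_MEM)
    supported = pynvml.nvmlDeviceGetSupportedGraphicsClocks(gpu, mem_mhz)

    # Trim ~10% off the peak (illustrative number) while leaving the floor alone,
    # so the card can still downclock when idle.
    cap_mhz = int(max(supported) * 0.9)
    pynvml.nvmlDeviceSetGpuLockedClocks(gpu, min(supported), cap_mhz)

    # Optionally also pull the power limit in a bit, if thermals are a concern.
    lo_mw, hi_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
    pynvml.nvmlDeviceSetPowerManagementLimit(gpu, max(lo_mw, int(hi_mw * 0.9)))

    pynvml.nvmlShutdown()

The point of the split is that the clock cap directly bounds the frequency/voltage the silicon sees, while the power limit only bounds the aggregate draw and leaves the clocking decision to the controller.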
What part of anything I said involves "hope"? I said measure how they correlate, in order to set limits that favor a longer service life.
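For what it's worth, "measure" doesn't have to be anything fancy. A sketch along these lines (pynvml again, with an arbitrary 1 Hz sample rate and an hour of samples) gives you clock/power/temperature/utilization traces you can then correlate before deciding where to set the limits:

    # Sketch: sample clock, power, temperature, and utilization together so you can
    # see how they actually move together under your real workloads.
    import csv, time
    import pynvml

    pynvml.nvmlInit()
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

    with open("gpu_samples.csv", "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["t", "sm_mhz", "power_w", "temp_c", "util_pct"])
        for _ in range(3600):  # an hour of 1 Hz samples; interval is arbitrary
            out.writerow([
                time.time(),
                pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_SM),
                pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0,  # mW -> W
                pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU),
                pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu,
            ])
            time.sleep(1)

    pynvml.nvmlShutdown()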
I'm not talking about current practice, which I'd expect is mostly centered around balancing against short-term operational costs (e.g. energy costs and cooling capacity). I'm talking about what you would do if you really wanted to increase the service life of these components.
Hey, it's a heck of a lot better than reducing duty cycle! That essentially means letting your hardware idle, where it wastes space while still drawing some power!