The original 150W power consumption figure definitely seems a little odd relative to the 5700. I understand that pushing performance higher results in power consumption increasing faster than the performance does, so I would've assumed that cutting things back would result in a better performance-per-watt profile.
The one example I can think of is the Vega 56:
Using the secondary BIOS with a power limit reduced by 25% gets us 159.4W and 32.7 FPS. Compared to the stock settings, just 71.6% of the power consumption serves up 89% of the gaming performance.
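Just to spell out the efficiency math, here's a rough sketch in Python. The stock figures aren't quoted above, so they're back-calculated from the percentages and should be treated as approximate:

```python
# Efficiency check on the Vega 56 numbers. Stock figures are
# back-calculated from the quoted percentages, so they're approximate.

reduced_watts, reduced_fps = 159.4, 32.7  # secondary BIOS, power limit -25%

stock_watts = reduced_watts / 0.716  # ~222.6W (159.4W is 71.6% of stock)
stock_fps = reduced_fps / 0.89       # ~36.7 FPS (32.7 is 89% of stock)

print(f"Stock:   {stock_fps / stock_watts:.3f} FPS/W")      # ~0.165
print(f"Reduced: {reduced_fps / reduced_watts:.3f} FPS/W")  # ~0.205
```

So cutting the power limit improved efficiency from roughly 0.165 to roughly 0.205 FPS per watt.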
So I find the estimates a little baffling. Still, we don't know exactly what the 5500's performance will be.
This is probably a VERY sloppy way of doing this, but I'm going to make some guesstimates based on how various cards score relative to the 5700 (score of 87.5) on the GPU hierarchy chart, then multiply the 5700's TDP (180W) by the resulting percentage. Obviously, this assumes that the hierarchy score is absolutely accurate, and that power consumption is directly proportional to performance as measured by that score, so take these estimates with an enormous boulder of salt. (There's a quick script reproducing the math below the table.)
| If the RX 5500 performs the same as... | Performance score | Score percent (score ÷ 87.5) | TDP estimate (180W × score percent) |
|---|---|---|---|
| RX 5700 | 87.5 | 100% | 180W |
| GTX 1660 Ti | 71.4 | 81.6% | 147W |
| GTX 1660 | 63.6 | 72.7% | 131W |
| RX 590 | 60.7 | 69.4% | 125W |
| RX 580 8GB | 57.9 | 66.2% | 119W |
| GTX 1060 6GB | 53.2 | 60.8% | 109W |
| RX 570 4GB | 48.3 | 55.2% | 99W |
| GTX 1650 | 42.2 | 48.2% | 87W |
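For anyone who wants to rerun or extend the guesstimate, here's a minimal Python sketch of the same math. The scores are the hierarchy numbers from the table, and the linear power-scales-with-score assumption is the same boulder-of-salt one described above:

```python
# TDP guesstimates: scale the RX 5700's 180W TDP by each card's GPU
# hierarchy score relative to the 5700's score of 87.5. Assumes power
# consumption is directly proportional to the hierarchy score.

BASELINE_SCORE = 87.5  # RX 5700 performance score
BASELINE_TDP = 180     # RX 5700 TDP in watts

scores = {
    "RX 5700":      87.5,
    "GTX 1660 Ti":  71.4,
    "GTX 1660":     63.6,
    "RX 590":       60.7,
    "RX 580 8GB":   57.9,
    "GTX 1060 6GB": 53.2,
    "RX 570 4GB":   48.3,
    "GTX 1650":     42.2,
}

for card, score in scores.items():
    percent = score / BASELINE_SCORE
    print(f"{card:<14} {percent:6.1%}  ~{BASELINE_TDP * percent:.0f}W")
```

Swapping in different scores (or a different baseline TDP) is all it takes to redo the table once real 5500 benchmarks land.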