Generally I like this article, nicely done. But I want more. One game and one synthetic benchmark doesn't really paint a good representation of how these cards actually perform in the real world:
It took quite a bit of time to get everything set up for testing and then run through the 40-ish cards. If/when a new game comes out that's worth testing, I can see about capturing power data. The main difficulty with getting good power data is you ideally need a test that lasts long enough for the GPU to warm up and level off, and a lot of benchmarks only last 60-90 seconds. Metro loops but still has dips. But yeah, it's a topic I plan to keep an eye on, as Metro Exodus for sure isn't representative of all games.
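To illustrate the warm-up problem described above, here's a minimal sketch of how one might trim the ramp-up portion of a per-second power log before averaging. The leveling-off heuristic, window size, and tolerance values are my own illustrative assumptions, not the article's actual method.

```python
# Sketch: trim GPU warm-up samples from a 1 Hz power log before averaging.
# The leveling-off heuristic and thresholds are illustrative assumptions.

def steady_state_average(samples, window=30, tolerance=2.0):
    """Return the mean of samples taken after power has leveled off.

    samples: per-second power readings in watts.
    window: number of consecutive samples to inspect.
    tolerance: max spread (watts) within the window to call it steady.
    """
    for start in range(len(samples) - window + 1):
        chunk = samples[start:start + window]
        if max(chunk) - min(chunk) <= tolerance:
            # Steady state reached; average everything from here on.
            steady = samples[start:]
            return sum(steady) / len(steady)
    raise ValueError("power never leveled off; run the benchmark longer")

# Example: 60 s ramp from 150 W toward 250 W, then steady at 250 W.
log = [150 + i * (100 / 60) for i in range(60)] + [250.0] * 120
print(round(steady_state_average(log), 1))
```

This is exactly why a 60-90 second benchmark is problematic: if the card is still ramping when the test ends, the steady-state window is never reached and the average is meaningless.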
Idle temps were truncated off the time vs. power charts; it would have been nice to know how the cards compare, since computers (mine, anyway) sit idle for most of the day.
I assume you mean power, and I clipped off that data for these charts. It's interesting to see how the various GPUs behave, though. Newer models are definitely better at getting down to low power idle states faster. Everything GTX 10-series and later or RX 400 and later looks decent, but I know the GTX 780 tended to idle at 40-70W for much longer than the 980, 1080, and 2080. Again, I can see about capturing this for future articles but didn't focus on it here.
Does having an extra monitor change anything?
What about games that use DX12 vs Vulkan vs DX11?
Also, having temperature and frequency overlaid with power would be great.
Probably, see above on games with different APIs (this was Metro using DX12), and I think putting temp, power, and frequency on a single chart would be a mess. I do have the temp, power, fan speed, GPU load, etc. data though -- I just didn't report it here.
How long do cards boost for?
Would be nice to see an at-the-wall measurement; it may be a good way of seeing driver overhead on the CPU.
Do you plan on using some of the power-saving features, like AMD Chill?
Boost depends on many things, including the card design and fan speed. Most of the GPUs have relatively stable MHz during gaming tests, but some models definitely fluctuate more. FurMark meanwhile puts enough of a strain on the GPUs that quite a few really throttle down after the first 10-60 seconds.
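As a rough sketch of how throttling like that could be spotted in logged clock data, here's one way to flag when clocks first drop and stay below the initial boost. The 95% threshold, hold time, and the synthetic trace are illustrative assumptions, not measured values from the article.

```python
# Sketch: estimate when a GPU starts throttling under a sustained load
# (e.g. FurMark), given per-second core-clock samples in MHz.
# Threshold, hold time, and example data are illustrative assumptions.

def throttle_onset(clocks, fraction=0.95, hold=5):
    """Return the index (in seconds) where clocks first stay below
    fraction * peak for `hold` consecutive samples, or None if never."""
    peak = max(clocks[:10])  # boost peak usually shows up early in the run
    limit = peak * fraction
    below = 0
    for i, mhz in enumerate(clocks):
        below = below + 1 if mhz < limit else 0
        if below == hold:
            return i - hold + 1
    return None

# Example: 1900 MHz boost for 30 s, then a drop to 1600 MHz as limits kick in.
trace = [1900] * 30 + [1600] * 60
print(throttle_onset(trace))
```

Run against a card with stable clocks, this returns None; against the FurMark-style trace above, it pinpoints the second the sustained drop begins.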
AMD Chill is mostly out of the scope of this testing, but it's something I could look at in the future. The problem is that it's a LOT of time for each additional type of test, and many of the resulting data points aren't particularly important. Putting dozens of hours into testing for an article that most people won't read isn't a good use of the available resources (namely, me and my time).
In other words, additional testing while desirable for a variety of reasons often isn't practical. Sort of like doing extensive overclocking, undervolting, etc. testing is beyond the scope of a review. The reality of PCs is that probably 99% of users run stock, so that's where our testing time is best spent.