Two questions, then:
- Where is this information coming from about Arc's TDP (or TBP?) being different than Nvidia's or AMD's?
- Where is that 190W number coming from? I just did a brief search, and Intel specifically states 225W (though the Arc 750 product page I found didn't list specs at all)
In fact, if you hit the little question mark icon, it brings up a modal window specifically stating:
(emphasis mine)
1. You misread what I wrote on this one. I have cards from all three vendors and I see how they behave. In most games, Nvidia and AMD cards will max out their power limit when fully utilized; for that reason, AMD cards generally get better performance when undervolted, and Nvidia cards when effectively undervolted by adjusting the voltage/frequency curve. This is well established. Intel cards are rarely power limited and get better performance when overvolted. Overvolting is so alien to AMD and Nvidia GPU users that you must have thought I meant something different. I did not say that Intel's TDP is a different kind of measurement; I meant that it is applied differently in practice. Intel's TDP is set way too high for most uses, though some synthetic benchmarks will reach it.
2. The 190W number came from the TDP, just as I stated. It is listed in my Arc OC software and GPU-Z as the current and default power limit, but it refers only to the GPU chip. I assumed that others were using TDP as well, which was my error. There were also a number of stories three years back claiming that AMD was misrepresenting its numbers for that particular generation, like this one:
https://www.igorslab.de/en/graphics...with-nvidia-and-nearly-impossible-with-amd/5/ which may or may not have been correct.
But since it seems I may have been incorrect in my initial assessment, and since it is an easy problem to fix: 225W - 190W = 35W, and 130W - 35W = 95W. The 35W is conservative, since the power delivery and cooling fans draw less power as the chip draws less, but I can still go with 35W less than the roughly 130W used by the 6600 and limit the core of my A750 to 95W, since the slider in the software goes that low. That way, the A750's 95W core plus the rest of the board at under 35W should come in below the 6600's stated 132W TBP.
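To make that power-budget reasoning explicit, here is the arithmetic as a small Python sketch. The wattage figures are the ones quoted above; the ~35W board overhead is my own conservative assumption, not a measured value.

```python
# Power-budget sketch for matching A750 board power to the 6600's TBP.
# Values are the ones discussed above; the board overhead is an assumption.

a750_tbp = 225           # Intel's stated total board power for the A750 (W)
a750_chip_tdp = 190      # GPU-chip power limit shown in the Arc OC software / GPU-Z (W)
rx6600_tbp = 130         # rounded-down board power figure used for the 6600 (W)

board_overhead = a750_tbp - a750_chip_tdp      # 225 - 190 = 35 W for VRAM, VRM, fans, etc.
a750_core_limit = rx6600_tbp - board_overhead  # 130 - 35 = 95 W core limit to set on the slider

print(f"Non-chip board overhead: {board_overhead} W")
print(f"A750 core limit to match the 6600: {a750_core_limit} W")
# Worst-case board power at that limit: 95 + 35 = 130 W, under the 6600's stated 132 W TBP.
print(f"Estimated A750 board power at that limit: {a750_core_limit + board_overhead} W")
```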
Time Spy at a 95W TDP gave a graphics score of 7858, which is a bit less than the 8071 in the Guru3D 6600 review, so I increased the performance boost slider in the Arc OC panel and got a graphics score of 8497 at 95W. (Stock fps were 56% higher than at 95W and 45% higher than at 95W with the OC, so the most power-hungry workload suffered the most from the dropped power limit.)
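For clarity, every "X% faster" figure in this post means the gain relative to the slower result. A quick sketch with the two Time Spy graphics scores above:

```python
# "A is X% faster than B" here means (A - B) / B * 100.

def percent_faster(a: float, b: float) -> float:
    return (a - b) / b * 100

ts_95w = 7858     # Time Spy graphics score at the 95 W limit
ts_95w_oc = 8497  # Time Spy graphics score at 95 W with the performance-boost OC

print(f"95 W OC vs 95 W: {percent_faster(ts_95w_oc, ts_95w):.1f}% faster")  # ~8.1%
```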
In CP2077 at 95W, 1440p ultra, using the built-in benchmark, the A750 got 46.75fps, and with the OC it got 51.03fps at 95W. I don't know what the 6600 gets at these settings, but I do know that the reference A750 gets 39% more fps than the 6600 at 1440p here:
https://www.techpowerup.com/review/intel-arc-a750/11.html and that on my PC the stock reference A750 gets 42% more fps than the 95W A750 and 30% more fps than the OC'd 95W A750, so the 6600 would fit in between those two results (a rough check of that is sketched below).
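That "fits in between" claim can be sanity-checked using only the numbers above; nothing here is newly measured, just the fps values from my runs and the TechPowerUp ratio.

```python
# Back-of-the-envelope check that the 6600 should land between my two 95 W results.

cp_95w = 46.75     # my A750 at 95 W, 1440p ultra, built-in benchmark (fps)
cp_95w_oc = 51.03  # my A750 at 95 W with the OC (fps)

# Stock A750 is 42% faster than the 95 W run and 30% faster than the OC'd 95 W run.
stock_est_a = cp_95w * 1.42     # ~66.4 fps
stock_est_b = cp_95w_oc * 1.30  # ~66.3 fps (the two estimates agree closely)
stock_est = (stock_est_a + stock_est_b) / 2

# TechPowerUp shows the reference A750 about 39% faster than the 6600 at 1440p.
rx6600_est = stock_est / 1.39   # ~47.7 fps

print(f"Estimated stock A750: {stock_est:.1f} fps")
print(f"Estimated RX 6600:    {rx6600_est:.1f} fps")  # falls between 46.75 and 51.03
```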
In Doom: The Dark Ages, at 1080p ultra nightmare with XeSS performance, the 95W limit dropped the Hebeth bench down to 52.49fps and the OC'd 95W dropped it to 55.90fps. The stock A750 is only 14.4% faster than the 95W result and 7.5% faster than the 95W OC. To compare the 95W TDP (~130W TBP) A750 to the 6600 I have to do a little extrapolating. In the previous CP2077 link both the A770 and A750 were shown, and the A750 was 92.9% as fast as the A770 at 1440p, which is GPU-limited but not VRAM-limited. Doom: The Dark Ages shares both of those characteristics, so I took the A770 value and multiplied it by 0.929 to compare it to the 6600 at 1080p:
https://www.techpowerup.com/review/doom-the-dark-ages-performance-benchmark/5.html
The approximated stock A750 is 30.7% faster than the 6600 there.
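The extrapolation step itself, as a sketch. The A770 and 6600 fps values here are placeholders standing in for the numbers read off the TechPowerUp chart, since I haven't copied them into this post; only the 0.929 ratio is from the discussion above.

```python
# Approximating a stock A750 result from the A770 entry in the TPU Doom chart.

A750_TO_A770_RATIO = 0.929  # A750 was 92.9% as fast as the A770 in the CP2077 chart

# Placeholder values: substitute the actual 1080p fps from the TPU review.
a770_fps = 100.0    # hypothetical A770 fps at 1080p
rx6600_fps = 70.0   # hypothetical RX 6600 fps at 1080p

a750_est = a770_fps * A750_TO_A770_RATIO
pct_faster = (a750_est - rx6600_fps) / rx6600_fps * 100

print(f"Approximated stock A750: {a750_est:.1f} fps")
print(f"A750 vs 6600: {pct_faster:.1f}% faster")
```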
These scenarios favor Intel, but new games will be harder to run than they were in 2022, and 1080p in 2025-2027 games will likely tax GPUs the way 1440p did in 2022 games. I'm not trying to prove that the A750 is more efficient than the 6600 (although it is in these uses), just that the difference is not a huge, deal-breaking one. The continuing relative decline of the 6600, on top of its poor upscaling and pitiful ray tracing performance, should weigh more heavily when games are only going to use those features more and more in the future. Even AMD is pushing them.