For the entire memory system, yeah? Not per chip. There's a reason it's the GDDR6X chips that get got.
This video explains GDDR6X and its power draw: https://www.youtube.com/watch?v=HqbYW188UrE
It's absolutely not in the single-digit watts.
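To put rough numbers on it - purely a back-of-envelope sketch, assuming the ~7.25 pJ/bit access energy Micron has cited for GDDR6X, and ignoring the memory controller/PHY on the GPU die:

```python
# Rough GDDR6X memory power estimate (illustrative figures only).
# Assumes ~7.25 pJ/bit device access energy; real draw also depends on
# traffic mix, refresh, and the controller/PHY on the GPU side.

PJ_PER_BIT = 7.25e-12        # assumed energy per bit transferred, in joules
BANDWIDTH_BYTES = 760e9      # RTX 3080-class: ~760 GB/s peak bandwidth
NUM_CHIPS = 10               # ten 1 GB GDDR6X packages on a 3080

total_w = BANDWIDTH_BYTES * 8 * PJ_PER_BIT   # bytes/s -> bits/s -> watts
per_chip_w = total_w / NUM_CHIPS

print(f"memory power at full bandwidth: ~{total_w:.0f} W")    # ~44 W
print(f"per chip: ~{per_chip_w:.1f} W")                       # ~4.4 W
```

So single digits per package, sure, but the memory system as a whole is well past that before you even count the controller and PHY.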
When there are factors such as differences in process nodes and total board power, how is that a fair comparison?
The smaller the node > greater thermal density > more difficult to cool (may warrant more specialized cooling solutions).
Among the current halo (6950 XT) Radeons, many have their total board power capped between 300-330 W. Nvidia has matched and passed that with the 3070 Ti...
Memory doesn't use much power at all - in the single digits of watts - but the higher board power SKUs see it bathe in more heat. GDDR6X having higher operating temperatures than GDDR6 doesn't help.
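As a crude illustration of the per-area heat point - the die sizes and board powers below are approximate public figures, and board power obviously isn't all dissipated in the die, so treat the numbers as relative rather than actual die heat flux:

```python
# Crude power-density comparison (approximate figures, relative only).
cards = {
    # name: (approx. total board power in W, approx. die area in mm^2)
    "RX 6950 XT (Navi 21, TSMC N7)":   (335, 520),
    "RTX 3070 Ti (GA104, Samsung 8N)": (290, 392),
}

for name, (tbp_w, area_mm2) in cards.items():
    print(f"{name}: ~{tbp_w / area_mm2:.2f} W/mm^2")
# A smaller die pushing similar power has to shed more heat per unit area,
# which is what makes it harder to cool even at comparable board power.
```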
The biggest problem for Nvidia was GDDR6X temps, not GPU temps. Partner cards were way better than the Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps by 10-20C. I rarely see GPU temp on a 3080/3090 go above 80C, and it's often closer to 70C.
Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.
In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.
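If you want to capture the before/after of a pad swap yourself, a minimal logging sketch like this works - note that, as far as I know, nvidia-smi does not expose GDDR6X memory-junction temperature on GeForce cards, so the VRAM readings still have to come from HWiNFO or GPU-Z; this only logs core temp, power, and clocks alongside them:

```python
# Minimal GPU logger for a before/after pad-swap comparison.
# Uses only query fields nvidia-smi exposes on GeForce cards; memory-junction
# temperature is not among them, so pair this with HWiNFO/GPU-Z for VRAM temps.
import csv, subprocess, time

FIELDS = "timestamp,temperature.gpu,power.draw,clocks.sm"

def sample():
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return [v.strip() for v in out.stdout.strip().split(",")]

with open("gpu_temp_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS.split(","))
    for _ in range(600):          # ~10 minutes at one sample per second
        writer.writerow(sample())
        time.sleep(1)
```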
I didn't say all of their cards are the same and that they don't work for anyone. They would have been out of business long ago if that were the case. Still, I feel that I am free to avoid products I have bad experiences with. What you do is your thing. To answer your question, I would do the same with Intel as I did with AMD; three strikes and out.

That's your experience, and as unfortunate as it was (sorry to hear that), it doesn't mean the entire brand is bad for everyone else, presently or later on.
What about those currently using Radeon without a hitch? Some have bad experiences with Nvidia too, though underrepresented.
There can be heat issues with models from both sides, so that one is a bit lost on me.
Will you still lock yourself to one option (Nvidia) if Intel has those same issues with future generations?
That is quite interesting; my 3070 Ti, also with GDDR6X VRAM, runs at around 60-65°C core and 70-75°C hotspot, with VRAM temps basically right in the middle of those. No pad exchange required. The highest I have seen was about 75°C average on a very hot summer day, I think. Still pretty safe. I'm not sure what the issue with the bigger cards is exactly, whether it lies with early models or a general construction error. I don't have an FE card, though, so that might point at the FE not being so well constructed, or at least the pads being bad?
The Founders Edition 3070 Ti without any mods got GDDR6X temps well above 100C if you tried mining. I think even FurMark got it to 100C, and that was only with eight chips. Better cards did not get as hot, obviously, but the FE designs were not awesome IMO.
Meanwhile, all the AMD cards I had, and all the tests I have seen, ran hotter. The worst was a card that ran up to 120°C and then shut the PC down - in GW2, which really isn't all that graphically demanding and never was.