News EVGA Abandons the GPU Market, Reportedly Citing Conflicts With Nvidia

Page 8 - Seeking answers? Join the Tom's Hardware community: where nearly two million members share solutions and discuss the latest tech.
Sep 10, 2022
When there are factors such as differences in process nodes and total board power, how is that a fair comparison?

The smaller the node, the greater the thermal density, and the more difficult it is to cool (which may warrant more specialized cooling solutions).

Among the current halo (6950 XT) Radeons, many have their total board power capped between 300-330W. Nvidia has matched and passed that with the 3070 Ti...
Memory doesn't use much power at all - single-digit watts - but on the higher board power SKUs it bathes in more heat. GDDR6X having higher operating temperatures than regular GDDR6 doesn't help.

Well, I'm not trying to compare the two; I was just stating facts, since someone mentioned that AMD is full of thermal and heat issues. And you're simply confirming the reasons why it generally runs hotter.

As I said, I know people complain about AMD drivers, and I also know people keep complaining about Nvidia temps; whatever they complain about, it isn't really my business to find a solution. It's just that the complaints are always there, and honestly, nothing comes out perfect. If we want something really cool and really fast, we should all just get those top-range cards, water cooled.
 
Sep 10, 2022
The biggest problems for Nvidia were GDDR6X temps, not GPU temps. Partner cards were way better than Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps 10-20°C. I rarely see GPU temps on a 3080/3090 go above 80°C, and they're often closer to 70°C.

Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.

In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.
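For anyone who'd rather watch these numbers from a script than a GUI tool, here's a minimal sketch that parses `nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader` output and flags anything running hot. The 95°C warning threshold and the fallback sample readings are illustrative assumptions, not measured values; note that many consumer cards don't expose memory-junction temperature through nvidia-smi at all (the field reports N/A), which is why tools like HWiNFO are the usual way to see GDDR6X temps.

```python
import subprocess

# Illustrative warning threshold; GDDR6X typically throttles around 110C.
WARN_C = 95

def parse_temps(csv_text: str) -> list[int]:
    """Parse per-GPU CSV output from nvidia-smi (one temperature per line)."""
    temps = []
    for line in csv_text.strip().splitlines():
        value = line.strip()
        if value and value != "N/A":  # consumer cards may report N/A for some fields
            temps.append(int(value))
    return temps

def check(temps: list[int]) -> list[tuple[int, int, bool]]:
    """Return (gpu_index, temperature, over_threshold) for each GPU."""
    return [(i, t, t >= WARN_C) for i, t in enumerate(temps)]

if __name__ == "__main__":
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        out = "72\n104\n"  # sample readings when no NVIDIA GPU is present
    for idx, temp, hot in check(parse_temps(out)):
        print(f"GPU {idx}: {temp}C" + ("  <-- running hot" if hot else ""))
```

This only covers the GPU core sensor; for VRAM junction temps like the 110°C mining numbers above, a hardware-monitoring tool is still the practical choice.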

Man, you explained it well, but most people don't get this explanation when asking for help 😂 Most of them just get advised to undervolt, without anyone telling them it is what it is.
 

KyaraM

Admirable
Mar 11, 2022
That's your experience, and as unfortunate as it was (sorry to hear that), it doesn't mean the entire brand is bad for everyone else, presently or later on.
What about those currently using Radeon without a hitch? Some have bad experiences with Nvidia too, though they're underrepresented.
There can be heat issues with models from both sides, so that one is a bit lost on me.

Will you still lock yourself to one option (Nvidia) if Intel has those same issues with future generations?
I didn't say all of their cards are the same and that they don't work for anyone. They would have been out of business long ago if that were the case. Still, I feel I am free to avoid products I have bad experiences with. What you do is your thing. To answer your question, I would do the same with Intel as I did with AMD: three strikes and you're out.

That is quite interesting. My 3070 Ti, also with GDDR6X VRAM, runs at around 60-65°C, with a 70-75°C hotspot and VRAM temps basically right in the middle of the two. No pad replacement required. The highest I have seen was 75°C on average, on a very hot summer day, I think. Still pretty safe. I'm not sure what exactly the issue with the bigger cards is - whether it lies with early models or is a general design flaw. I don't have an FE card, though, so maybe that points at the FE not being so well constructed, or at least at the pads being bad?

Meanwhile, all the AMD cards I had, and all the tests I have seen, ran higher. Worst was the card that ran up to 120°C and then shut down the PC - in GW2, which really isn't all that graphically demanding and never was.
 

bretbernhoft

Prominent
Jan 18, 2022
My first "real" GPU was an EVGA product, the 1050, and I've been buying their graphics cards ever since. So it's a real shame that this company has ended its famously durable GPU line, whatever the reason is. Oh well, the marketplace will crown a new champion manufacturer. Who that will be, only time can tell.
 

JarredWaltonGPU

Senior GPU Editor
Editor
The Founders Edition 3070 Ti without any mods got GDDR6X temps well above 100°C if you tried mining. I think even FurMark got it to 100°C, and that was with only eight chips. Better cards did not get as hot, obviously, but the FE designs were not awesome IMO.

As for AMD, if you never tried the RDNA or RDNA 2 cards, I can definitely corroborate your experience. Polaris and Vega ran hot, so did the earlier "Southern Islands" and "Northern Islands" architectures. RDNA 2 is quite nice by comparison, and even the reference designs with GDDR6 don't get particularly loud or hot in my experience.