News GDDR6X in 3080 and 3090 Hits 110C While Mining Ethereum

"We also don't know exactly what "GDDR6X Memory Junction Temperature" means "

Well, the article writer might want to do some research on that one.

And any miner worth their salt keeps temps MUCH lower than this... I've got some 3090 cards mining, and if your memory temps are anywhere near that high, you are doing it wrong. You can't mine on stock settings and expect the best results. You can easily hit a 120 MH/s hashrate on a 3090 without compromising temps.
 

JarredWaltonGPU

Senior GPU Editor
Editor
"We also don't know exactly what "GDDR6X Memory Junction Temperature" means "

Well, article writer might want do some research on that one.

And any miner worth their salt keeps temps MUCH lower than this... Ive got some 3090 cards mining, and if your memory temps are anywhere near that high you are doing it wrong. Can't mine on stock settings and expect to get the best results. Can easily hit 120 hashrate on 3090 without compromising temps.
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. Unless you're modding the cards to improve cooling, or voltage modding, I'm not seeing how you get significantly higher performance than what I measured (100 MH/s on a 3090) while keeping GDDR6X temperatures below 100C, never mind 110C. What's even better is the claim that "any miner worth their salt keeps temps MUCH lower than this." Really? And how do you know what temperature your GDDR6X memory is running at? Other GPU monitoring utilities don't report GDDR6X thermals, AFAIK: MSI Afterburner doesn't report GDDR6X temps, and neither does EVGA Precision X1. So what utility were you using, prior to the latest HWiNFO64, that supported GDDR6X temp reporting? For the record, the GPU was running at around 60C with the results I measured, so GDDR6X at 110C, GPU at 60C.
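As a rough sketch of what I mean about the standard tooling (assuming the nvidia-ml-py / pynvml package, and not something I've verified on every driver): about all NVML officially hands you is the GPU core temperature, and the memory-temperature field it defines may simply come back unsupported on these consumer boards.

import pynvml  # assumes the nvidia-ml-py package is installed

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# GPU core temperature -- this is the number Afterburner / Precision X1 show
core_temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
print(f"GPU core: {core_temp} C")

# NVML does define a memory-temperature field, but whether the driver actually
# populates it on a consumer GDDR6X board is another matter entirely.
try:
    field = pynvml.nvmlDeviceGetFieldValues(handle, [pynvml.NVML_FI_DEV_MEMORY_TEMP])[0]
    if field.nvmlReturn == pynvml.NVML_SUCCESS:
        print(f"Memory (NVML field): {field.value.uiVal} C")
    else:
        print("Memory temperature field not supported on this board/driver")
except pynvml.NVMLError as err:
    print(f"Memory temperature query failed: {err}")

pynvml.nvmlShutdown()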

As for the first part, put the whole quote in: 'We also don't know exactly what "GDDR6X Memory Junction Temperature" means, as far as the other GDDR6X chips are concerned.' We're saying we don't know if what HWiNFO64 reports actually equates to what Nvidia and Micron would say, or what temperatures the other nine (3080) or 23 (3090) chips are running at. I've been around long enough to know that software can and often does differ in how sensors are interpreted. You would think that if Micron says the GDDR6X chips are rated for up to 95C TjMax, the cards would throttle at 95C and not 110C. So perhaps HWiNFO64 doesn't know about an offset from the sensors, or there's something else at play. (Just one example: the Ryzen 7 1700 had a 15C offset in the temperature it reported to the fan connector, so the mobo would think the CPU was at 85C when in fact it was at 70C.)

What I can say, unequivocally, is that at stock the exteriors of all the RTX 3080 and 3090 cards I've tested can get very hot to the touch -- 70C or so. It physically hurts if you put your hand on a card that's running at that temperature. Hold it there long enough and it would burn you. The backs of the cards are often much hotter than the fronts where the fans are. Aiming a large fan at the cards would help, and if I were planning on doing 24/7 mining with any of these cards, I'd plan on some serious external airflow at the very least.

If I'm incorrect, I'd love to know how. Please prove me wrong by posting a video showing your mining software running at 120 MH/s with GDDR6X temperatures "MUCH lower than this" and let us know the secret. Until then, I'll just toss this out there. RTX 3090 FE, sitting next to an open window (outside is maybe 40-45F right now), minimum clocks set on GPU and RAM, fan speed at 65 percent. The GPU core is running at around 1350MHz and 53C. The GDDR6X is running at 18Gbps and 108C "GPU Memory Junction Temperature." I've had this running for an hour or so doing testing, and I'm done with that now. I'll take that huge $0.77 in earnings (which is actually from testing mining and GDDR6X temperatures on multiple PCs with different GPUs over the past few hours) and call it a day. Because running consumer-focused gaming GPUs like this for months at a time doesn't seem like a safe bet. "Look, I made $500 off of Ethereum mining over a two month period ... and then burned out my $1500 GPU that you can't even find in stock!" Thanks but no thanks.
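For anyone tempted anyway, the back-of-the-envelope math is simple enough. Every number below is a placeholder for illustration, not a measurement:

# Rough mining payback estimate -- all of these figures are assumptions
card_cost = 1500.00       # USD, roughly a 3090's MSRP
gross_per_day = 8.00      # USD/day, in the ballpark people were quoting in early 2021
power_draw_w = 300        # watts at a reduced power limit (assumption)
electricity = 0.12        # USD per kWh (assumption)

power_cost_per_day = power_draw_w / 1000 * 24 * electricity   # ~$0.86/day
net_per_day = gross_per_day - power_cost_per_day              # ~$7.14/day
print(f"Net per day: ${net_per_day:.2f}")
print(f"Days to pay off the card: {card_cost / net_per_day:.0f}")   # ~210 days
# ...assuming the coin price, the network difficulty, and the GDDR6X all hold up that long.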
 

jpe1701

Honorable
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. ...
Jarred, it is good to see a Tom's Hardware writer/editor in the comments. Thank you for being a member of the community. If it is anything like GDDR6, the junction temp is the internal temp and the actual surface temp is around 14C lower; at least that is how it is for the GDDR6 on the 5700 XT. I was unfortunate enough to have bought the first revision of the Asus TUF RX 5700 XT, which had a tiny piece of metal for the memory heatsink that wasn't attached to the main heatsink, and I had to send it back to Asus twice before they fixed it so that the memory wouldn't shoot up to 105C after only a few minutes. Now it peaks at 94C according to HWiNFO64.
 
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. ...
I registered to add this comment.

NVIDIA, of the two main GPU designers, is historically less utilized than AMD for mining in general. They likely simply overlooked the ability of these cards to reach high GDDR6X temps while mining, because they simply don't want their mainline cards used for that purpose. Utilizing the card's memory for high-intensity mining will raise memory temps past what you would see in most cases, even in server applications.

The little line in the article assumes NVIDIA simply lets the temperature of the memory be pushed to that point without hesitation. What is more likely is that the memory is stressed beyond what you would see in any kind of usual usage, to the point where throttling the GPU does nothing to mitigate it. Using these cards for mining, something NVIDIA has not announced any support for outside the crypto SKUs, is an unsupported application of the hardware.

Beyond that, the VBIOS is usually prepped by the developer and then sent to the board partners for modification, or built from the ground up by those same partners. The issue would then fall on them for not optimizing their cards for mining, not on NVIDIA.

If you use these cards for unsupported applications, it should be abundantly clear that those applications might lead to thermal runaway or other damage. What is most likely here is that the memory pads are insufficient for mining use and would need to be upgraded for that unsupported use. This is a non-issue.
 
"We also don't know exactly what "GDDR6X Memory Junction Temperature" means "

Well, article writer might want do some research on that one.

And any miner worth their salt keeps temps MUCH lower than this... Ive got some 3090 cards mining, and if your memory temps are anywhere near that high you are doing it wrong. Can't mine on stock settings and expect to get the best results. Can easily hit 120 hashrate on 3090 without compromising temps.
Gimme a break. On Reddit, NO ONE has reported memory junction temps lower than this on the stock cooler, no matter their mining settings.
 

Phaaze88

Titan
Ambassador
No one really noticed the VRAM thermals unless:
-they had something like EVGA's FTW3, where they can be monitored via Precision X.
-they used thermocouples.

It'll be interesting to see/hear what happens to second-hand buyers of this generation of mining GPUs.
 

btmedic04

Distinguished
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. ...


Not all of us 3090 owners are mining due to the current craze. I wanted a 3080 originally, but after 2-3 months of dealing with bots and with scalper prices approaching retail 3090 levels, I decided to jump on an EVGA RTX 3090 FTW3 that became available at my local Micro Center. I mine in my off time to recoup some of the cost. I run my card at a 70% power limit and force the memory to run at its stock 9750 clock with Precision X1, and I'm seeing 109 MH/s on average in NiceHash. The thermocouples that come as part of ICX3 show memory temps of 71C, 72C and 75C, whereas HWiNFO is showing 100C average with a peak of 102C. I'm more apt to trust the thermocouples right now than HWiNFO64, since new sensor-reading methods are generally inaccurate in the first couple of software iterations. I suspect an offset is wrong in HWiNFO, based on the fact that even in gaming, GDDR6X temps are reported at 100C.

In the meantime, I'll keep chipping away at the original cost of my 3090 at about $8 a day unless you confirm with Nvidia that GDDR6X shouldn't be run at 100C, and even then I may continue, as I also purchased EVGA's extended warranty and advanced overnight RMA coverage.
 

junglist724

Honorable
The thermocouples that come as part of ICX3 show memory temps of 71C, 72C and 75C, whereas HWiNFO is showing 100C average with a peak of 102C. I'm more apt to trust the thermocouples right now than HWiNFO64, since new sensor-reading methods are generally inaccurate in the first couple of software iterations. I suspect an offset is wrong in HWiNFO, based on the fact that even in gaming, GDDR6X temps are reported at 100C.
There are 22 memory modules and only 3 ICX sensors for the memory. Based on the pic that Precision X1 shows when viewing the ICX sensors, none of them are on the back of the card, and they certainly aren't going to be as accurate as the built-in GDDR6X temp sensors, since they can only be mounted near the modules, not inside them. HWiNFO shows the highest temp from any of the modules, and the ones on the rear are going to be significantly hotter, with the nearby GPU die, VRM, and the memory modules directly on the opposite side of the PCB all dumping heat into them. Plus, EVGA's backplates are only about 1mm thick.

Igor's Lab already measured temps as high as 84.1C directly behind one of the memory modules on a 3080 FE PCB, and that card doesn't even have rear memory modules. On my 3090 FTW3 with an Optimus waterblock, the ICX sensors show 49-51C while HWiNFO shows 78C. In games, my 3090 FE hits 86C memory junction temps on an open-air test bench with a 2000 RPM NF-A12x25 pointed directly at the backplate, and it stays pinned at 110C when mining at the max power limit.
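To put hypothetical numbers on that (made up, purely to show why a few spot sensors and a hottest-module reading can both be "right" at the same time):

# Hypothetical per-module junction temps -- front-side modules under the cooler run
# cooler, the ones behind the die on the back of the PCB run hottest. Not measurements.
front_modules = [72, 75, 78, 81, 84, 86]
back_modules = [94, 97, 99, 100, 101, 102]

# A few pad-mounted spot sensors sitting near (not inside) front modules will read
# lower than the silicon junctions they sit next to.
icx_spot_sensors = [71, 72, 75]

print("ICX-style spot sensors:", icx_spot_sensors)
print("Hottest module junction (what HWiNFO-style reporting shows):",
      max(front_modules + back_modules))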
 

zodiacfml

Distinguished
Well, if you had spent an hour or less researching how miners are doing fine with this issue, you would have had a more useful and longer article. But I guess your article works at discouraging GPU mining, though.
 

Koen1982

Commendable
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. ...
https://media-www.micron.com/-/medi...rief.pdf?rev=b65e2075b1f2437ab5fa0d4c06e6aa6f

-> To be short on this: Micron never defines a junction temperature in their datasheet, only a case temperature (Tc; this is also the case for DDR3 and DDR4). If you stay within this limit it's OK (of course, the cooler it is, the better). Junction-to-case thermal resistance and power are also missing from the PDF that is available to everyone, so is the thing you are writing an article about even a problem?
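For what it's worth, the usual relation is Tj = Tc + P * theta_jc; since Micron's public brief gives neither the per-chip power nor the junction-to-case resistance, the numbers below are pure placeholders, just to show why that missing data matters:

# Estimating junction temp from case temp: Tj = Tc + P * theta_jc.
# Every value here is a guess -- Micron's public GDDR6X brief publishes neither
# the per-chip power draw nor the junction-to-case thermal resistance.
t_case = 86.0      # degC measured on the package surface (hypothetical)
p_chip = 2.5       # watts dissipated per chip (guess)
theta_jc = 4.0     # degC per watt, junction to case (guess)

t_junction = t_case + p_chip * theta_jc
print(f"Estimated junction temperature: {t_junction:.1f} C")   # 96.0 C with these guesses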
 

VforV

Respectable
BANNED
well, if you have spent an hour or less to research how miners are doing fine with this issue then you would have a more useful and longer article. but i guess your article works discouraging GPU mining though.
Yeah, doing fine? Now?
How about someone who is considering buying a used 3080 after this mining craze, one year from now?

Thanks to this article I won't even consider buying a card used for mining now (or better, not buying a used 3080, period). Unless they get so cheap that it's worth losing the card and the money when they fail...

Between the shortages (intended and not), the individual scalper scum and the company scalpers (AIBs, which are the same), the gamer fools who buy at those insane prices, and the abhorrent miners, this whole continuing situation makes me vomit. Bleah.

By the looks of things, I guess I'll feel like this for at least an entire year...
 

Deleted member 2851593

Guest
I declare a minute of silence for the poor souls who, in the coming months, will buy those cards used from miners at a price higher than MSRP, only to see them artifact to death within a week.
 
I had already added this HSF to my Zotac 3090 knowing it was near the thermal limit while mining. It dropped the VRAM temp from 104C to 92C when I tested it.

I know no one knows for sure at this point, but is there any consensus on whether this is relatively safe at 92C?

View: https://imgur.com/lU2nsaD

That seems effective :) ! Definitely appealing! I have the exact same card and it runs at 105°C at an 80% power limit, but the fan speed is only 90% (set to default, so not throttling until 110C as stated). I would consider that solution, thanks for the post! What are your components? Did you have any issues with the thermal paste application? Can it be removed without risk for the RMA? I don't have more than 1.2 cm of space (CPU air cooler). Can I move the GPU to the second slot without compromising anything on the X570 (is the bandwidth important)?
Since I have two spare 120mm fans and a GPU holder, it would be a nice use for them!

It would be nice of you if I got even a short reply :) to the main question. I just started mining three weeks ago and I don't want to prematurely ruin this 1760€ gaming card.
 
I appreciate the editor taking the time to give a thorough and thoughtful response. There are more direct ways of measuring temperature than relying on software (as was correctly pointed out - such readings are often wildly inaccurate). There are many creative ways of lowering temps. The majority of what I have learned about GPU mining has come from trial and error.

The VRAM gets nice and toasty, so it is imperative that you have good airflow around the entire card, and it helps if you have a cooling solution involving very low ambient temperatures. Airflow/exhaust planning is crucial for a mining operation. Simply adding fans doesn't help if you don't think carefully about the airflow itself and about exhausting the hot air. Engineering a negative air pressure environment and funneling in cold air pays dividends, literally. For the big cards, I recommend adding your own thermal pads if other cooling methods are unavailable or impractical.
 

UnCertainty08

Reputable
I'm gonna call bunk on this one. I did some testing, playing with clocks, fan speeds, and power limits. ...
Long-time gamer, first-time miner with an EVGA 3090 FTW3 Ultra. I get ~120 MH/s on Ether with the GPU under 50°C and the ICX memory sensors under 60°C, and I just found out today, after reading this article and updating HWiNFO, that my memory is over 80°C.
Not trying to brag or argue, just reporting what I have for the common good and the article.

Also that's with +129 core and +1294 mem with boost lock on.

***I'm not able to provide a pic; Imgur won't let me register for some weird reason, and I don't normally use it. I don't use social media/spyware, but even so I installed Instagram on my phone to try to provide a URL for including a pic, and it's not working out.
I'd be happy to send a pic directly or through the forums somehow, or post a pic if you could help with it in any way. I don't want to come across as a troll; things are just acting weird and I can't get a screenshot into my post.
 

Phaaze88

Titan
Ambassador
What about the VRAM temps on 3070 cards? They use non-X GDDR6, if memory serves.
You can just refer to the 20 series and how they held up.

***I'm not able to provide a pic; Imgur won't let me register for some weird reason, and I don't normally use it. ...
Postimages.org
Imgbb.com