[SOLVED] Old PC crashes with new GPU during stress test


sassmouth

Distinguished
Dec 20, 2010
Hi, I just installed an RTX 3080 in a Dell Precision T7610. It seems to be working; I've installed the latest drivers.

But when I ran a GPU stress test using FurMark, the machine shut off immediately. I previously tried a test using GPU UserBenchmark, and the same thing happened, though not immediately.

I updated the BIOS and tried GPU UserBenchmark again, and about three-quarters of the way through the test it crashed again -- just shut off and restarted.

I don't think the GPU was even running intensely -- I didn't hear the fans, and it crashed within seconds of the tests starting...

Though the GPU is warm, the fans don't appear to be running very much at all.

Any ideas whether this is workable, or is this just too much card for this old PC? The machine has 64 GB of RAM and two Xeon processors.

Thanks!

https://i.dell.com/sites/csdocuments/Business_smb_merchandizing_Documents/en/us/Dell_Precision_T7610_Spec_Sheet.pdf
 
Solution
Do you think there's any way to troubleshoot this? If not, I think I may have to return the GPU. Thanks!
In a normal system, the simplest troubleshooting step would be to try another PSU of appropriate rating and quality, though that would be an expensive test when you need ~1000 W. Since it is a Dell, though, this may not be an option due to the likely proprietary PSU. Jury-rigging a second PSU just for the GPU might be an option.

Since there is an Ivy Bridge-era PSU in there, it may not have been designed to cope with Haswell-era transients to begin with. Add eight years of wear on top, and it has little chance of getting along with a modern high-end GPU's almost ATX 3.0-class transients.
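To make the transient concern concrete, here is a rough back-of-the-envelope sketch in Python. The 18 A per-rail OCP limit comes up later in this thread; the 2x transient multiplier is an assumption about Ampere-era power spikes, not a measurement of this particular card or PSU:

```python
# Back-of-the-envelope check: can a single 18 A rail absorb an
# RTX 3080's transient spikes? Figures here are illustrative
# assumptions, not measurements of this PSU.

RAIL_OCP_AMPS = 18          # per-rail OCP limit (assumed, from the thread)
RAIL_VOLTS = 12.0
GPU_BOARD_POWER_W = 320     # RTX 3080 rated board power
TRANSIENT_MULTIPLIER = 2.0  # Ampere cards can briefly spike to ~2x rated power

rail_capacity_w = RAIL_OCP_AMPS * RAIL_VOLTS                  # 216 W per rail
transient_peak_w = GPU_BOARD_POWER_W * TRANSIENT_MULTIPLIER   # ~640 W peak

print(f"Per-rail capacity: {rail_capacity_w:.0f} W")
print(f"Estimated transient peak: {transient_peak_w:.0f} W")
print("Single rail would trip OCP:", transient_peak_w > rail_capacity_w)
```

In other words, if the card's power leads all land on one OCP'd rail, even a brief spike can exceed the rail's limit and shut the machine down, which matches the instant power-offs described above.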

sassmouth

On a 1300 W PSU, I'd expect at least 80 A total on 12 V. If it is OCP'd in 18 A chunks, that would be at least five circuits.

Thanks again for fielding these questions. Here's a link to an image of this PSU's power summary in a YouTube video where they swap out this same PSU; it shows the amperage divided over (are these the rails?) A through J at 18 A each. (I tried to upload a screengrab, but it prompted me for a link.)

Should I still avoid trying, say, a 3060 Ti, which is 200 W vs. the 3080's 320 W? Or is that moot because of the probable inability to handle the rapid load changes you mentioned? Again, thanks for your insight!
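For reference, the rail arithmetic quoted above works out as follows (a sketch; the 1300 W, 80 A, and 18 A figures come from this thread):

```python
import math

PSU_RATED_W = 1300
RAIL_VOLTS = 12.0
RAIL_OCP_AMPS = 18   # per-rail OCP limit shown in the video's power summary

# A 1300 W unit can deliver at most roughly 1300 / 12 ~ 108 A on 12 V;
# the expectation in the thread is at least 80 A of usable 12 V current.
max_12v_amps = PSU_RATED_W / RAIL_VOLTS
expected_12v_amps = 80

# Carving 80 A into 18 A OCP chunks needs at least five circuits:
min_rails = math.ceil(expected_12v_amps / RAIL_OCP_AMPS)
print(f"Theoretical 12 V ceiling: {max_12v_amps:.0f} A")
print(f"Minimum rails at {RAIL_OCP_AMPS} A each: {min_rails}")  # -> 5
```

The video's screenshot showing rails A through J at 18 A each -- ten circuits -- is comfortably above this five-circuit minimum.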
 

sassmouth

It should, if nothing else is on the rail. You mentioned that you are using adapters to go from 6-pin to 8-pin; are they dual-6 to single-8, or single to single?
How many PCIe connectors are available?

I think I have 3 PCIe connectors available -- two 6-pins and one 8-pin, which has a 1x8-to-2x6 adapter on it -- but I'm not using that with the 3080. I have the 3080 connected via the two 6-pin connectors, with a 6-to-8-pin adapter going to each of the two ports on the card...
 

sassmouth

It should, if nothing else is on the rail. You mentioned that you are using adapters to go from 6-pin to 8-pin; are they dual-6 to single-8, or single to single?
How many PCIe connectors are available?

Should I try using the single 8-pin connector, split to two 6-pins, then connected to the card via the two 6-to-8 adapters? Is the goal to use only one connector back to the PSU, or to spread the load across two?
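As a rough sketch of why spreading the load matters: the PCIe spec nominally rates the slot at 75 W, a 6-pin connector at 75 W, and an 8-pin at 150 W. Assuming the card splits its draw beyond the slot evenly across its two 8-pin inputs (an assumption for illustration; real cards don't balance perfectly):

```python
# Nominal PCIe power ratings and the 3080's rated board power.
SLOT_W = 75        # power delivered through the PCIe slot
SIX_PIN_W = 75     # nominal 6-pin auxiliary connector rating
EIGHT_PIN_W = 150  # nominal 8-pin auxiliary connector rating
CARD_W = 320       # RTX 3080 board power

# Assume draw beyond the slot splits evenly across the two 8-pin inputs.
per_input_w = (CARD_W - SLOT_W) / 2   # 122.5 W per input

print(f"Per 8-pin input: {per_input_w:.1f} W")
print("Within an 8-pin lead's nominal rating:", per_input_w <= EIGHT_PIN_W)
print("Within a 6-pin lead's nominal rating:", per_input_w <= SIX_PIN_W)
```

So each card input only stays within a connector's nominal rating if it is fed by a true 8-pin lead; a 6-to-8 adapter asks a 75 W-rated 6-pin lead to carry roughly 122 W, and here it may also concentrate that draw onto a single OCP'd rail.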
 

sassmouth

Spread the load: connect the 8-pin, and use a 2x6 for the other 8. Chances are you will then be using 3 rails.

Cool, I'll try that. But should I do it with the 3080 that I have and try the stress test again? (I'm gun-shy after two power failures.) Or should I just box up this 3080 and its 320 W and try again with the smaller 3060 Ti? Thanks again!
 

sassmouth

I would try with the 3080

Holy Rolli, it worked! I detached one of the 6-to-8 adapters and plugged the single 8-pin cord in its place, next to the other 6-to-8, and it ran smoothly! In the earlier test it didn't even get to the GPU portion of the test. It seems that second cord with the 6-to-8 adapter, or the adapter itself, was the culprit?!

Results say the card is crazy fast, though it's still running a bit slower than it could; maybe that's my old PSU/CPUs. Either way it's plenty more than I need, and I can use this card with a new machine down the road if/when I upgrade.

I'm just amazed it worked. Thanks again for the help!

UserBenchmarks: Game 133%, Desk 75%, Work 165%
CPU: 1st CPU: Intel Xeon E5-2680 v2 - 71.3%
GPU: Nvidia RTX 3080 - 188.3%
SSD: Intel SSDSC2BB48 480GB - 51.5%
HDD: WD Re 2TB - 48.3%
RAM: Nanya NT8GC72C4NB3NK-CG 8x8GB - 84.2%
MBD: Dell Precision T7610
 
Holy Rolli, it worked! I detached one of the 6-to-8 adapters and plugged the single 8-pin cord in its place, next to the other 6-to-8 and it ran smoothly! In the earlier test it didn't even get to the GPU portion of the test. It seems that 2nd cord with the 6-8 adapter, or the adapter itself, was the culprit?!
Great -- Dell server PSUs are normally good quality. Apparently you previously had an already-occupied rail attached, or both 6-pin connectors are on the same rail.
 
  • Like
Reactions: sassmouth

InvalidError

Titan
Moderator
Great -- Dell server PSUs are normally good quality. Apparently you previously had an already-occupied rail attached, or both 6-pin connectors are on the same rail.
It is still eight years old, and possibly of a design that precedes Haswell. It's hard to tell how well its over-engineering for 2013-2014 loads will endure the beating from the 2020 components that prompted the new ATX 3.0 power design spec, superseding the previous major update made to accommodate Haswell.
 
  • Like
Reactions: sassmouth
It is still eight years old, and possibly of a design that precedes Haswell. It's hard to tell how well its over-engineering for 2013-2014 loads will endure the beating from the 2020 components that prompted the new ATX 3.0 power design spec, superseding the previous major update made to accommodate Haswell.
Those Xeons were released three months after Haswell; and being proprietary, the PSU is not easily replaced anyway.
 
  • Like
Reactions: sassmouth

InvalidError

Those Xeons were released three months after Haswell; and being proprietary, the PSU is not easily replaced anyway.
Doesn't change the fact that they are Ivy Bridge (v2) Xeons, not v3 (Haswell) ones, so the PSU may still be pre-Haswell stock.

What I was getting at is: it works for now, though there is no knowing how close it might be to not cutting it anymore. It may be sufficiently over-engineered to handle the 3080 for the remainder of the system's useful life, or the extra strain may accelerate the aging of an already-worn PSU and cause issues in a few weeks or months. Short of doing a visual inspection of the PSU's innards, taking a few ESR measurements on the caps, and sticking an oscilloscope on the outputs under load, only time will tell.
 
  • Like
Reactions: sassmouth

sassmouth

Doesn't change the fact that they are Ivy Bridge (v2) Xeons, not v3 (Haswell) ones, so the PSU may still be pre-Haswell stock.

Thanks for the heads-up; I'll definitely keep it in mind. At the moment I'm not doing heavy GPU rendering, but I likely will do more. And I may replace this machine later this year, so this may just be a bridge solution until then, after which I can use the GPU in the next machine. Hopefully I won't be risking the GPU by using it in this machine. And though I don't want to risk damaging the machine itself, if that's the only risk (besides, I dunno, a fire? -- which I certainly hope is not the case) then I'm willing to hope for the best.
 

sassmouth

It is a server PSU from a well-known international brand. I'd expect it to "fail gracefully" under most circumstances, since I doubt any server manufacturer would last long using PSUs with a tendency to destroy equipment, rendering data recovery difficult if not impossible.

Those are good, positive points. I also bought this machine in 2018 from the Server Store, and it was unused then. So perhaps the PSU is post-Haswell. But even if it isn't, at least it only has four years on the odometer. Fingers crossed, and thanks again! (Hoping I don't update in a week after some disaster.)