News User claims RTX 4090 16-pin power connector melted on both GPU and PSU side, despite running at 75% power

TH also has a nice article about one particular computer repair shop that gets 100s of 4090s every day that are melted. Ya. It's definitely user error. I wonder why users keep making errors on plugging in a 4090, but ALMOST never on anything <=4080. Mankind has been plugging electrical cables into sockets for over 100 years, yet suddenly, when it comes to a single item, we're just too stupid to figure out how to do it. Good grief.
And you believe that? You actually believe that? 100s of 4090s per day, all out of warranty for whatever reason (if they were under warranty they would send them to the AIB / Nvidia). 100s? 100s?? Per day? At a single shop? Dude... come on now.
 
....Now I triple dog dare ANYONE to do that with a 4090. I dare you. Go ahead.... hashtag noballs. I dare you.....
No problem, I can do that for you, but since mining isn't a thing right now, I'd need an incentive. What do I gain? I'll leave it for 5 days straight, 24/7, at 450 watts and send you a HWiNFO screenshot at the end. Or even better, I might capture the whole thing on video. But I need to gain something, else I'm not wasting my time.

I'd be more worried about the VRMs than the cable; in fact, I'm not worried about the cable. At all.
 
The Molex Micro-Fit+ (which 12VHPWR was based on) is specced for 13A per pin. 12VHPWR gangs 6 of those pins, so 78A total, or 936W per connector, and that's for steady-state operation at up to 600V, not just transients at 12V. The connector itself is well overspecced for its current use case.
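For anyone who wants to sanity-check that arithmetic themselves, here's a minimal sketch (Python), using the 13A-per-pin and six-pin figures from this post rather than any official spec sheet:

```python
# Back-of-envelope check of the headroom figures above.
# Assumptions (taken from the post, not from an official spec document):
# 13 A per Micro-Fit+ style pin, 6 ganged 12 V power pins.
PIN_RATING_A = 13   # amps per power pin (assumed)
POWER_PINS = 6      # 12 V pins ganged in the connector
RAIL_V = 12         # nominal rail voltage

total_current_a = PIN_RATING_A * POWER_PINS   # 78 A
max_power_w = total_current_a * RAIL_V        # 936 W

print(f"Theoretical capacity: {total_current_a} A -> {max_power_w} W")
print(f"Margin over the 600 W rating: {max_power_w / 600:.2f}x")
print(f"Per-pin current at a 450 W draw: {450 / RAIL_V / POWER_PINS:.2f} A")
```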

As demonstrated in every test thus far, the problem is not the connector design, nor is it connector construction (e.g. soldered vs. crimped, square vs. cropped pins, etc., all tested and proven to have no effect). Partial insertion is the only factor that has been proven to replicate the effects seen in real-world occurrences, and real-world occurrences have all had the characteristic marks of partial insertion (visible at the base of the partially inserted pins).

A connector that is simultaneously easier to insert but harder to pull out would be nice, but nothing is foolproof to a sufficiently talented fool. You could install a buzzer that loudly screeches if the connector is not fully inserted, and a subset of users would still snip the buzzer out and run anyway.
 
In case it wasn't clear what I meant since I was a bit vague, I was saying Nvidia was not the one that found out and called it user error. That was multiple other sources, and not just "youtubers". Nvidia then later responded that they had found the same thing, noting that it was the case in 100% of cards that had been sent in.

I don't trust tech-tubers of any kind, because a lot of them have relationships with Nvidia that could be tarnished if they came out with some damning evidence against them. I'm betting (though I could be wrong) that a tech-tuber would no longer receive "free cards for testing" if they said something Nvidia didn't want them to say. Those contracts and NDAs are very strict. I wouldn't doubt it if Nvidia discovered the problem first, told the tech-tubers to make videos that they wanted viewers to see, then came out and said "Oh ya, the tech-tubers were right. It's user error." Even if this is not the case, even if the tech-tubers did a 100% neutral, unbiased, objective test, those tests were short-lived. Running on full load for 15 minutes, and then saying "Ya, no melt here. User error" doesn't count. Did any of those tech-tubers do a build inside a case, plug the GPU in once, leave the cable alone and not touch it again like most normal builders do, then run a 2-day full-load test? That's what I would like to see. Testing in a controlled environment doesn't really help much. I want to see it done in a real-world, typical situation, from the perspective of someone who just bought a 4090 and doesn't know about the melting problem. Let's see how the test goes.

Well, as you say, there is a strong bias towards higher-power cards, independent of cable style. It takes two to tango: you need to pull enough power plus have a fault before the cable will melt. And other cables melting isn't all that much rarer than it is on 4090s. The 4090 is already only vaguely in the 0.05% range. For hardware, that's a pretty good rate. Apple/Samsung wish their phones were that reliable.

Exactly, because the problem here... is that those tiny little pins on the new connector are not designed to safely and reliably carry 450W long-term without issues. That's why 4090s have this problem, not 4080s or below. This is a design flaw, not user error.

Also, I would point out that 1 repair shop is the only one saying it, and they're saying it on their YouTube, with links referring people to themselves. That easily puts them below random youtubers ( as low as that is ), and puts them alone. No one has corroborated their story in any way, so I would file that firmly under "suspicious", much like the original melting reports and the original speculations about the cause.

I get that, but I would fully encourage someone like LTT or GN to fly down there and do an in-person vetting of this person, and I bet they are telling the truth.

The difference with the failure to plug in is that there is simple physical evidence anyone can check. When the cable isn't seated properly, it leaves a visible mark on the plug. If you see that mark, it's (primarily) the user's fault. This means you can happily ignore / distrust the youtubers, and Nvidia, and repair shops, and the users... and just look at it for yourself.

.....Or..... because of the design flaw, the cable slowly came out over time after the user had completed the build. Remember, not all the melts happened 1 day after using the freshly built PC. Some of them happened after a year. So a user with 20 years of experience, who doesn't know about the melting problem, does a build and plugs the cable in all the way, the same way he has for 20 years of building PCs. Over the next 4 months, the cable slowly comes loose; meanwhile, the gamer has found a very demanding, high-wattage game that he's binge playing to level up fast, constant high load, boom, there's the melt.

It's also worth noting that there remains only 1 single method of reproducing the issue: not properly seating the cable. Every single other theory has failed to reproduce the problem, typically barely even making the connector warm.

False. It can be reproduced in many ways, such as what my 20-year-experienced builder did in the previous paragraph, and it has happened to a great many builders as well, even ones who did research and knew about the melting problem before building.

Wow, complete BS here. "100% perfect down to the subatomic level"? There wouldn't even be a single working one. If you think the connector manufacturers are managing a 99.95% success rate when dealing with subatomic precision, you are out of your mind. In fact, pretty sure quantum mechanics expressly forbids that level of precision.

I didn't think I would have to say it, because I thought it was quite obvious, but I guess not. So here goes: I was exaggerating... just a tiny bit... to make a point.

You may notice, BTW, that most of your suggestions are exactly what I said. I then went and checked the details, and surprise! PCI-SIG already did the big ones: improving the pins, lengthening the power pins, and shortening the sense pins. You'll also notice those are specifically designed to prevent the problem of improperly seating the cable. If it wasn't an actual problem, they wouldn't be implementing the fix for it.
This fix has already been implemented in the latest versions of the cards.

A partial fix has been implemented. I'm sure it's helped a little bit... but obviously not entirely, because some cards are still melting.

No, plenty of other cards also have not been properly seated. It just wasn't a brand-new $1,600 monster GPU getting splashed all over the internet. They absolutely were there, though.

Yes, it has happened to other cards, but not nearly as much as the 4090, because, as discussed at the start, power draw is a factor.

I actually have been daily-driving a 4090 since a week or 2 after release, overclocked at that, and have run entire days of heavy gaming load, as well as benchmarking. People have pulled over 1,000W through the cable, which is MORE than double the default, and done just fine. I know 4 other people that have also been daily-driving a 4090 for nearly the same time. ( I worked in the gaming industry, so I know a lot of hardcore gamers. ) My work had systems running them 24/7, no issues reported. Many cloud gaming servers and other high-end servers run hundreds of RTX 4090s, again 24/7, without issue.

I don't doubt that the connector can work, if set up a specific way. I'm sure there are plenty of 4090s that are doing just fine. They just haven't come loose yet. It also depends on the case, the form factor, the cable gauge; that sort of stuff matters. Bottom line is, this shouldn't be happening at all, or at least it should be extremely rare.

Reminder, Nvidia didn't make the connector. It was PCI-SIG that designed it, and ( last I heard ) 2 manufacturing companies that made it. Nvidia just bought the connectors and soldered them on. ( Well, they were also part of the large group that worked to design it, including AMD and Intel. ) And, they DID step right up. They offered an RMA for absolutely anyone with a problem, even if the card was actually a 3rd party card, like PNY, ASUS, Gigabyte, etc.

I didn't know these companies offered RMAs for cards that were broken due to user error. Maybe this is an admission of their error, and maybe all the RMA offers are why the class-action lawsuit never happened.

I'll tell you myself: the very second I first laid eyes on that connector, BEFORE the 4090 melting problem first cropped up, I knew right away this was going to happen. I took one look at it and right away I was like... ya... there's no way those tiny little pins are rated for 600W. This has to be some kind of joke. Bottom line is this: while the user does have steps they can take to mitigate the problem, they shouldn't have to do that. Plugging in a 4090 shouldn't be a specific, detailed, intricate process. At the end of the day, all we are doing is plugging a plug into a socket, something we have all done a billion times. It shouldn't be this hard. You plug it in once, and forget about it until you need to unplug it again. This is how electrical plugs have worked since their invention. A person should be able to just plug in a 4090, make sure the connector reaches the end of the socket on the 4090, then not touch it again, run 450W around the clock, and go on vacation for 2 weeks with peace of mind. That's how it should be, but it isn't this way. It always was, for as long as electric sockets have existed... right up until the 4090.

This entire problem, from top to bottom, through and through, is 100% design flaw and 0.00% user error. It always has been, and it always will be; that's just the fact. The 16-pin connector should never have been created. It's a fire hazard. The 8-pins were a well-matured standard and have worked for years. If it ain't broke, don't fix it.
 
Personally, I wouldn't lower myself to watching popular videos. Unless the people were actually free & private, rational & scrupulous, and knew a few things about science and engineering. FoolTube, as they say; nah, won't lower myself, thanks.

If you don't plug something in fully you can expect a possible fire. Like on my 1,500-watt 240V water distiller: they make a point of telling you to plug in the kettle cord until the white line isn't visible.

NVIDIA should have made it very clear to users: "If you don't plug it in fully, you run the risk of a fire." 40-50 amps is dangerous, even for pros who terminate cables. Fires break out all the time in DIY electrical terminations because the user didn't understand the dangers of current.
However, I think it's foolhardy to think the problem is solely user error (not plugging in fully). Nature isn't that clear-cut; usually you have a poor standard, possibly lousy quality control, and possibly a connector inserted only 3/4 of the way, and these interact in unpredictable ways over a long period.

I'd advise anyone not to draw conclusions unless it's from an actual engineering body of authority that presents the electrical theory behind their reasoning. For example, no one has mentioned what the voltage drop across the connector pins is under a heavy constant load. The ampacity of a conductor is limited by the insulation and is usually only taken into consideration for fusing the system. But this says nothing about the power wasted heating the wire, which will be conducted away quite well so the whole conductor has a uniform temp. The important factor is the cross-section, that is, how much meat is in the pins. We're not talking 13 mm² single core with heavy-duty lugs bolted down. Indeed, the pins on the 4000-series connector look far too small to my eyes.
So if there is substantial voltage drop and the pins are not seating right, that could lead to unpredictable outcomes not accounted for by laboratory tests. Again, lab tests are one thing, the real world is another; in the real world you're at the mercy of chaos.
Again, the pros work with actual engineering principles which are based on objective laws so these unpredictable problems don't arise in the first place.
Saying "the connector is rated for x amps" is rather meaningless.

One reason that turned me off the 4090 was I didn't want my life undermined by corporate people playing the blame game. I'm already having major problems with hardware and companies not taking responsibility.
 
I don't trust tech-tubers of any kind, because a lot of them have relationships with Nvidia that could be tarnished if they came out with some damning evidence against them. I'm betting (though I could be wrong) that a tech-tuber would no longer receive "free cards for testing" if they said something Nvidia didn't want them to say. Those contracts and NDAs are very strict. I wouldn't doubt it if Nvidia discovered the problem first, told the tech-tubers to make videos that they wanted viewers to see, then came out and said "Oh ya, the tech-tubers were right. It's user error." Even if this is not the case, even if the tech-tubers did a 100% neutral, unbiased, objective test, those tests were short-lived. Running on full load for 15 minutes, and then saying "Ya, no melt here. User error" doesn't count. Did any of those tech-tubers do a build inside a case, plug the GPU in once, leave the cable alone and not touch it again like most normal builders do, then run a 2-day full-load test? That's what I would like to see. Testing in a controlled environment doesn't really help much. I want to see it done in a real-world, typical situation, from the perspective of someone who just bought a 4090 and doesn't know about the melting problem. Let's see how the test goes.
Definitely don't trust random sources, but do remember that goes both ways. Just because a source is suddenly anti-big-company doesn't make it trustworthy.

While there are some cases where your fears are quite founded, Nvidia has traditionally not been one of them. ( BTW, sites like Tom's Hardware here are just as subject to the same things as any tech-tuber, and the same again for magazines, blogs, etc. )

However, a number of companies have been quite negative of Nvidia at various times, but still receive review samples, etc. I'm sure there are various restrictions, and things to sign ( and breaking NDA is a good way to actually make it onto a ban list ), but they haven't seemed to retaliate much for simple negative press.

For example, early on in this cable mess, those same tech-tubers hammered the disaster of melting cables.

It's definitely something to watch out for though, and I'm sure a constant headache for tech review companies.

As for the long-duration real-world testing, some of the tech-tubers did try that to an extent, but that's rather moot. There have been literal millions of RTX 4090s sold, running without issue.

As mentioned, I personally have had a 4090 pretty much since launch, and know multiple other people with 1, and there are many companies running them 24/7. The test has been done in the real world. I have never met a single person that has actually had a melted cable, either personal or business.
Exactly, because the problem here... is that those tiny little pins on the new connector are not designed to safely and reliably carry 450W long-term without issues. That's why 4090s have this problem, not 4080s or below. This is a design flaw, not user error.
Completely wrong: They are actually designed for significantly MORE than that.
As I said before, they are even capable of 1,000w.
I get that, but I would fully encourage someone like LTT or GN to fly down there and do an in-person vetting of this person, and I bet they are telling the truth.
Yea, I was hoping to find some sort of vetting one way or another, but so far I haven't found anything. No other repair shops have come forward and said it one way or the other; no one has done a verification or a debunking. All I have seen is 1 tech-tuber repair shop advertising his shop while making a pretty extraordinary claim. And extraordinary claims require extraordinary evidence.
....Or..... because of the design flaw, the cable slowly came out over time after the user had completed the build. Remember, not all the melts happened 1 day after using the freshly built PC. Some of them happened after a year. So a user with 20 years of experience, who doesn't know about the melting problem, does a build and plugs the cable in all the way, the same way he has for 20 years of building PCs. Over the next 4 months, the cable slowly comes loose; meanwhile, the gamer has found a very demanding, high-wattage game that he's binge playing to level up fast, constant high load, boom, there's the melt.
The cable can only come out if it was not fully seated. It has a very firm and secure latching mechanism, as I can personally attest to. I suspect if I pulled the cable hard enough, the connector might pull from the PCB before the latch gives way. ( Obviously, I'm not going to go that far. However, early on with the issue, I wanted to make sure my cable was plugged in, so I gave it a bit of a pull test and confirmed it was secure. )
False. It can be reproduced in many ways, such as what my 20-year-experienced builder did in the previous paragraph, and it has happened to a great many builders as well, even ones who did research and knew about the melting problem before building.
I'm not sure who you mean there, though I suspect it was the person that sued, as he made pretty much exactly that claim in the filings.

Except his own pictures clearly showed that the cable had not been properly seated.

I would also recommend caution with him. You might want to look him up more; I don't want to get into a targeted statement about an individual, and would rather stick to known general facts.

More generally, no, no one was able to reproduce the issue without having the cable partially unplugged. This included tech labs: bends did nothing, overclocking did nothing, a damaged cable did nothing, disconnecting 1 of the 4 cables did nothing, and the different pin styles didn't matter. There was only ever the cable not fully plugged in that could get hot enough to melt the connector.
I didn't think I would have to say it, because I thought it was quite obvious, but I guess not. So here goes: I was exaggerating... just a tiny bit... to make a point.
Making a false exaggeration hurts an argument though, especially on the internet, where you can't tell whether someone is actually using hyperbole or joking.
A partial fix has been implemented. I'm sure it's helped a little bit... but obviously not entirely, because some cards are still melting.

Yes, it has happened to other cards, but not nearly as much as the 4090, because, as discussed at the start, power draw is a factor.

I don't doubt that the connector can work, if set up a specific way. I'm sure there are plenty of 4090s that are doing just fine. They just haven't come loose yet. It also depends on the case, the form factor, the cable gauge; that sort of stuff matters. Bottom line is, this shouldn't be happening at all, or at least it should be extremely rare.
0.05% is pretty rare. Sure, it'd be nice if it wasn't a problem at all. Heck, that's why I have been following it so closely. I'm "an invested party". If there was a defect in the cable, I would want to know so I could return or replace mine.

But no real defect has been found. It's not as user-friendly as it could be, but there's an important distinction between the two: a defect means it doesn't work as designed; a user-friendliness issue does not. The cable is the latter; it works as designed.

An example would be the body armor I had in the Army. It was a bit heavy, and when you sat down it could be uncomfortable. These were things that could be improved for a better user experience. But it still could stop the bullets it was designed to stop, as long as it was worn properly. Now, some people might not have worn it properly because of the poorer user experience, but that was them making a major mistake over the top of a minor comfort issue. ( Insert Starship Troopers "don't remove your helmet" reference. )
I didn't know these companies offered RMAs for cards that were broken due to user error. Maybe this is an admission of their error, and maybe all the RMA offers are why the class-action lawsuit never happened.
Well, naturally it started for much less "altruistic" reasons: Nvidia just wanted hands on as many as possible so they could investigate any/all possibilities. This even included them asking AIBs to send cards their way.

Once it was found to be user error though, it would have looked really bad to turn around and start rejecting RMAs they had been doing. Presumably, they then determined it was sufficiently rare that it was easier and cheaper to just continue with it as is. Cost of PR. If it had been a big issue, they would have either stopped manufacturing or done a recall, to prevent the costs of endless RMAs piling up.
I'll tell you myself: the very second I first laid eyes on that connector, BEFORE the 4090 melting problem first cropped up, I knew right away this was going to happen. I took one look at it and right away I was like... ya... there's no way those tiny little pins are rated for 600W. This has to be some kind of joke. Bottom line is this: while the user does have steps they can take to mitigate the problem, they shouldn't have to do that. Plugging in a 4090 shouldn't be a specific, detailed, intricate process. At the end of the day, all we are doing is plugging a plug into a socket, something we have all done a billion times. It shouldn't be this hard. You plug it in once, and forget about it until you need to unplug it again. This is how electrical plugs have worked since their invention. A person should be able to just plug in a 4090, make sure the connector reaches the end of the socket on the 4090, then not touch it again, run 450W around the clock, and go on vacation for 2 weeks with peace of mind. That's how it should be, but it isn't this way. It always was, for as long as electric sockets have existed... right up until the 4090.

This entire problem, from top to bottom, through and through, is 100% design flaw and 0.00% user error. It always has been, and it always will be; that's just the fact. The 16-pin connector should never have been created. It's a fire hazard. The 8-pins were a well-matured standard and have worked for years. If it ain't broke, don't fix it.
Completely disagree. As mentioned, your initial assumption is just wrong. And it shouldn't be surprising: This cable is way beefier than cables rated at significantly higher power levels.

I just plugged some worst-case-scenario numbers into a wire-gauge calculator. At double the power, 5% voltage drop, and a maximum allowed temp of just 100°F, you get 20 AWG. The cable is 16 AWG. ( At 16 AWG it is good for 13A per wire. )
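For anyone who wants to rerun that wire-gauge check, here's a minimal sketch using standard copper resistance figures; the cable length and per-wire currents are assumptions, not measurements:

```python
# Quick voltage-drop check along the lines of the wire-gauge calculator above.
# Resistance figures are standard copper-wire values; the ~0.6 m cable length
# and per-wire currents are assumptions.
AWG_OHMS_PER_M = {16: 0.01317, 18: 0.02095, 20: 0.03331}  # solid copper, ~20 C

def drop_pct(current_a: float, awg: int, length_m: float = 0.6,
             rail_v: float = 12.0) -> float:
    """Round-trip (supply + return) voltage drop as a percentage of the 12 V rail."""
    r = AWG_OHMS_PER_M[awg] * length_m * 2
    return current_a * r / rail_v * 100

# ~6.25 A per wire at 450 W over 6 circuits; double it for the "2x power" case.
for awg in (20, 16):
    print(f"{awg} AWG: {drop_pct(6.25, awg):.1f}% drop at 450 W, "
          f"{drop_pct(12.5, awg):.1f}% at double power")
```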
 
I trust Der8auer and GamersNexus.

Both have been honest thus far about tech, and when something has a problem, they don't sugar-coat it. Take Intel Arc: despite Steve having a good relationship with Intel, he absolutely dunked on and trashed their GPUs, calling them out for lackluster performance and abysmal drivers.

Plenty of other examples. Is someone always trustworthy? Of course not, but if I were to trust two, they would be it.
 
Why isn't there a lawsuit for this? Otherwise there will be more melted 5090s.

Because 1 damaged connector every 3 months is not a very big problem, no matter how many viral clickbait non-stories are generated about it. Especially when the problem is usually user error. The world is a very big place. There's always something bad happening somewhere.
I had a micro USB port melt on my camera's 3rd party battery charger. Where's my international media coverage? It's not going to happen because nobody cares.

But if you're wondering why, in the face of potential fire, they haven't updated the 4090 connector to decrease the chances of a problem... they did, like a year ago.
 
Completely wrong: They are actually designed for significantly MORE than that.
As I said before, they are even capable of 1,000w.

1,000W spikes depending on use case? Or 1,000W flat-lined for 2 days? I'd like to see someone run that cable at 1,000W consistently for 2 days straight with no melt on either the GPU or the cable-plug side. Has this been tested yet? And I don't want it to be a special inch-thick custom 0.000002 AWG cable. I want it done from a normal PSU that most gamers would buy, with stock cables.

"The cable can only come out if it was not fully seated. It has a very firm and secure latching mechanism, as I can personally attest to."

The revised connector has that, which helps. But it's not just the connector itself coming loose; it's the pins inside that slowly pull back and come out over time. This is part of the design flaw. Some people have the revised connector plugged in all the way and still have the melting problem.

I'm not sure who you mean there, though I suspect it was the person that sued, as he made pretty much exactly that claim in the filings.

Sorry, I wasn't talking about a specific individual. I was talking about just any generic PC builder with 20 years of experience. Myself, as an example: I did my first build in 2003. Ain't no way I would set up a 4090 for a PC build, even if it was given to me for free. To me, it's just not worth the risk. Now, I would run 4090s for mining, because in mining the GPUs are custom-tuned, and you can drop the power draw by 30% or more, to a level the connector can handle in all setups.

More generally, no one was able to reproduce the issue without having the cable partially unplugged. This included tech labs: bends did nothing, overclocking did nothing, a damaged cable did nothing, disconnecting 1 of the 4 cables did nothing, and the different pin styles didn't matter. There was only ever the cable not fully plugged in that could get hot enough to melt the connector.

Then my question would be: why would it take some 4090s a year to melt? If it wasn't seated properly upon install, then the melting should have happened right away. Some gamers took a year to show melting on their 4090s. It's because the pins inside slowly become loose and lose contact over time from the pull-back. Again, this is part of the design flaw.

But no real defect has been found. It's not as user-friendly as it could be, but there's an important distinction between the two: a defect means it doesn't work as designed; a user-friendliness issue does not. The cable is the latter; it works as designed.

The defect has been found, but what happened is people noted that there were intricate and unique steps the user could take that make the symptoms of the defect less likely to manifest. Therefore, it's user error. That doesn't follow logically. See, Apple tried to tell that joke when the iPhone 4 had a design flaw that allowed the cell signal to drop if you held the device a certain way. Apple did the same thing: "Don't hold the phone that way." Yes, the user can change how they operate the device so the symptoms don't manifest. That doesn't change the fact that there's a design flaw. Apple tried to claim user error: don't hold it that way. Ummm... excuse me? Lol

If the cable was used as intended, there shouldn't be any melting. I imagine the intent is that you plug it in all the way, then move on to the next step in the PC-build process and don't worry about it anymore, like we have been doing for as long as PC building has been around. Reason would suggest that things should be no different with the 16-pin. But there are plenty of reports of people who plugged in the new revised connector all the way, with the latch, heard it click in, and later on their GPUs melted. Again, the pins are slowly coming loose and pulling back. This is a design flaw.

Completely disagree. As mentioned, your initial assumption is just wrong. And it shouldn't be surprising: This cable is way beefier than cables rated at significantly higher power levels.

The schematics of the 8-pin were placed side by side with the 16-pin. Just looking at the drawings alone, you don't even need the explanation; you can see the design flaw. A daisy-chained 8-pin is rated for 288W, max (we say 250 to leave headroom, hence my 2080S I discussed earlier). Two 8-pins is 16 pins in total, rated for 288. Then we come out with a single 16-pin, that's double the rating, with smaller, shorter, thinner pins that are closer together. At some point, common sense has to get a vote here.
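For what it's worth, the per-pin loading of the two connector styles can be compared directly. A minimal sketch, assuming the nominal spec figures (150W for a single PCIe 8-pin with three 12V pins, 600W for the 16-pin with six) rather than the daisy-chain numbers above:

```python
# Per-pin current each connector style is asked to carry at its nominal rating.
# Pin counts are the 12 V pins only; wattages are the nominal connector ratings.
connectors = {
    "PCIe 8-pin (150 W, 3x 12 V pins)":      (150, 3),
    "12VHPWR (600 W, 6x 12 V pins)":         (600, 6),
    "12VHPWR at 450 W (RTX 4090 default)":   (450, 6),
}

for name, (watts, pins) in connectors.items():
    amps_per_pin = watts / 12 / pins
    print(f"{name}: {amps_per_pin:.2f} A per pin")
```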

So now the question becomes: why are some 4090s melting and some not, even with the new connector that has the latch? It's because it depends on the setup, the form factor, the use case of the GPU, the load intervals, load times, and other variables. In some setups, the pins are more vulnerable to pull-back than in others, depending on how cables are routed through the case. That's just one example.

With a cable that has no design flaw, I should be able to use any form factor and route my cables however I want, without worrying about melting. I can do this with the regular 8-pins, but I cannot have this peace of mind with the 16-pin.

It is a design flaw, that is it, not user error.
 
How about this?
[Image: EV charging connector]

🤣
Those work safely as well. For EV chargers, the contact surface is far larger than is needed to handle the amperage, but with that much extra, you are less likely to run into the connector melting if the plug backs out by 1-2mm, like with the 12VHPWR.

If they want to fix the 12VHPWR-style connectors, they need to increase the mating surface, e.g., make the pins and connector 20-30% longer for a larger effective contact surface. That would have been better than the 12V-2x6 attempt, where they effectively just shortened the sense pins (essentially showing that their focus was building the connector as cheaply as possible).
 
@InvalidError has suggested 24V would be a more cost-effective solution. The USB Consortium also stopped well short of 48 V.
The 240W spec calls for 48V.

The reason I say 24V would be a better compromise is because MOSFETs get rapidly more expensive after 30V. If you want affordable electronics, 24V is the practical maximum for current technologies and prices.

Also, buck-regulator DC-DC conversion gets less efficient as the voltage difference between input and output increases, because the vanishingly small on-time causes turn-on/off losses to account for an increasingly large share of the energy transferred. In general, you don't want to push buck/boost regulators much beyond a 10X span, and 24V is already double that. If you want efficient conversion from 48V to 1V for a GPU/CPU core, you need to use transformers instead of plain inductors. That increases circuit complexity and costs too.
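A minimal sketch of that on-time argument, assuming an ideal buck converter and a typical (assumed) 500 kHz switching frequency:

```python
# Illustration of the buck-converter on-time point above.
# Ideal duty cycle D = Vout / Vin; on-time per cycle = D / switching_frequency.
# The 500 kHz switching frequency is an assumed, typical VRM figure.
F_SW = 500e3   # Hz (assumed)
V_OUT = 1.0    # volts, typical GPU/CPU core voltage

for v_in in (12, 24, 48):
    duty = V_OUT / v_in
    on_time_ns = duty / F_SW * 1e9
    print(f"{v_in:>2} V in -> duty {duty * 100:4.1f}%, on-time {on_time_ns:5.0f} ns per cycle")
```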
 
@InvalidError has suggested 24V would be a more cost-effective solution. The USB Consortium also stopped well short of 48 V.
BTW, just a quick re-reply to point out that I just heard there is a new USB-PD 3.2 spec which introduces 28V as a new step between 20V and 48V, likely to better accommodate laptops with 6S batteries. Which means you now need new USB cables with updated e-Mark chips that support 28V profiles if you need them. Fun!
 
Because 1 damaged connector every 3 months is not a very big problem, no matter how many viral clickbait non-stories are generated about it. Especially when the problem is usually user error. The world is a very big place. There's always something bad happening somewhere.
I had a micro USB port melt on my camera's 3rd party battery charger. Where's my international media coverage? It's not going to happen because nobody cares.

But if you're wondering why, in the face of potential fire, they haven't updated the 4090 connector to decrease the chances of a problem... they did, like a year ago.
Viral clickbait? User errors? Lol.
 