News Amazon: Don't Blame New World for GPU Deaths, Blame Card Makers

Status
Not open for further replies.

Falkentyne

Distinguished
Sep 22, 2008
They do. It's called listening to the rules of their control program.

Look at Jayz's video for an easy example. His GPU was NOT able to go over 102% power even with a hacked BIOS; it refused.

Yet New World had it over 111%.

What part of "the game ignores the GPU's limits" don't you understand?

The issue is it drawing much more power than it can handle.

Your CPU analogy is apples to oranges; similar, but not the same. A CPU is MUCH better equipped to deal with sudden surges and peaks.

You can't "limit" power in the sense you want, as it's out of spec and no failsafe can stop sudden power spikes (the same way a storm surge hits your power lines if you aren't using a proper surge protector).
Wrong.
Jay exposed a bug in MSI Afterburner versus the EVGA BIOS. On that BIOS, Afterburner was reporting TDP Normalized % as TDP. Jayz doesn't seem to believe in HWiNFO, so he probably doesn't know or care. If he had bothered to check GPU-Z and compare it to MSI Afterburner, he would have seen there was something off.
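If I understand the distinction correctly, the two readings diverge because "TDP Normalized %" is scaled to the currently enforced power limit, while plain "TDP %" is scaled to the card's default board power. A quick sketch of the conversion, with a hypothetical helper name and made-up wattages:

```python
# Hypothetical helper illustrating why "TDP Normalized %" (percent of the
# currently enforced power limit) and plain "TDP %" (percent of the default
# board power) disagree once the power limit slider is raised.
def normalized_to_default_pct(normalized_pct, current_limit_w, default_tdp_w):
    """Convert a TDP-Normalized reading into percent of default TDP."""
    watts = normalized_pct / 100 * current_limit_w
    return watts / default_tdp_w * 100

# With the limit raised to 366 W on a 320 W card, a "100%" normalized
# reading is really about 114% of default TDP.
reading = normalized_to_default_pct(100, 366, 320)
```

So two tools can both be "right" and still show different percentages, which is why cross-checking against GPU-Z or HWiNFO matters.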
 
Oct 9, 2021
So far, all of the cards that have died seem to have Alpha and Omega (AOZ) power stages (50A?). The eVGA (revision 0.1 PCB) and many Gigabyte cards have these power stages.
The new revision 1.0 PCB eVGA Cards seem to have Onsemi power stages and none of them have been reported as blowing up.

The Founders Edition cards (3080 and 3090) have smart power stages; I don't remember what they're called, but they're not AOZ, and they're 70-amp stages per phase.
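For a rough sense of why those per-phase ratings matter, here is a back-of-envelope headroom calculation. All numbers below are hypothetical: real phase counts, per-stage ratings, and core voltages vary by board, and transient current can briefly exceed these steady-state figures.

```python
# Back-of-envelope VRM headroom check with made-up numbers.
def vrm_headroom(phase_count, amps_per_phase, core_power_w, core_voltage):
    """Ratio of total stage current rating to the current the core draws.
    Values above 1.0 mean nominal (steady-state) headroom only."""
    capacity_a = phase_count * amps_per_phase
    draw_a = core_power_w / core_voltage
    return capacity_a / draw_a

# e.g. ten 50 A stages feeding a 300 W core at ~1.0 V: about 1.67x headroom
margin = vrm_headroom(10, 50, 300, 1.0)
```

Swapping in 70 A stages at the same phase count raises that nominal margin considerably, which is one reason the per-stage rating keeps coming up in these failure reports.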
I am kinda interested to know more about this. From the info I gather, the power delivery on the cards is what is failing. Is there a way to find out what the power stages on my card are and what their quality is like? I am not sure what to look for.
 
That's fine for users who have unlimited disposable income, but it is far from addressing the immediate issue. And most of us are on a budget. Odd response.

This entire paragraph is speculation. Perhaps the rest of us (without unlimited disposable income) are trying to find facts rather than speculate on what might be the case.

Raised.. Do you work for Amazon? Very partisan, pro-Amazon responses on a forum which was, 20 years ago, pretty nonpartisan. Everything is for sale these days, but the salesmanship is a bit lacking here.
All I can say is "context is king". Your answers are based on reading my statements out of the context of the person I am quoting. Are you doing that on purpose?

Stating facts has nothing to do with bias. The questions I posed are rhetorical, in the sense that they ask for the obvious things that should come to mind while analyzing this situation.

This is a good quote, by the way: "Never ascribe to malice, that which can be explained by stupidity".

Regards.
 

ElMoIsEviL

Distinguished
The hardware running above given limits is literally what boosting does. The difference is that GPUs are designed to boost in short spurts, not be running in boost mode all the time.
And New World doesn't control your graphics card's boost clocks. nVIDIA's sensors, coupled with their software, control the boost clocks.

Why should we believe that they didn't "hack the hardware drivers" when literally no other game on the market is behaving this way? Amazon is effectively banking its entire future as a game studio on this one title, so it's in their best interest to deny any and all guilt in PR nightmares of this magnitude.
You're reaching here.
 

BeedooX

Reputable
Apr 27, 2020
Or just use an AMD card which doesn't have any issues with the game?

I mean, you're basically advocating for nVidia's wet dream here: zero accountability demanded from them or the AIBs. Passing all the blame to Amazon and, more importantly, to the developers of the game, who just use the cards to develop and test on. Who knows, maybe Amazon did tell nVidia about it and they didn't do anything? AMD doesn't seem to have an issue with the game, so why would Amazon need to do anything?

I call BS on everyone not realizing this is nVidia's and the AIBs' problem, not Amazon's.

Regards.
It's probably Nvidia's fault; they likely had to fully open the taps to stay competitive with RDNA2. :D
 

Diceman_2037

Distinguished
Dec 19, 2011
the game
ignores
gpu power settings.

How do hardware makers do what you say when the game devs say <Mod Edit> you, we ain't listening?

ELI5 version: imagine you hand someone a gun and tell them it's only for hunting animals, then give it to a serial killer. You made it for animals, yet the user ignores your rules.

Games can't ignore power settings; you have no idea what you're talking about.

What's happening is that the load is varying too frequently for the analog buck controller to handle, so the system is stuck in a cycle of bucking and boosting to recharge capacitors and keep the VRM fed.

The game is only behaving like every MMO ever, and other MMOs have caused bad 30-series cards to die too.

These cards are bad. Educate yourself, get a basic understanding of electrical circuits, and then come back to the table.
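The point about the load varying faster than the controller can follow can be sketched with a toy first-order tracking model. This is NOT an electrical simulation of a buck converter; the gain, periods, and step counts are invented purely to illustrate limited control bandwidth.

```python
# Toy model: a controller with limited bandwidth tracks a slowly switching
# load well, but a load that flips faster than it can respond leaves a
# large average tracking error. All constants are made-up illustration values.
def mean_tracking_error(load_period, steps=2000, gain=0.1):
    """Average |target - output| for a load alternating between 0 and 1
    every `load_period` steps, tracked by a first-order controller."""
    out, total = 0.0, 0.0
    for t in range(steps):
        target = 1.0 if (t // load_period) % 2 else 0.0
        out += gain * (target - out)   # limited slew toward the target
        total += abs(target - out)
    return total / steps

slow = mean_tracking_error(load_period=200)  # load changes every 200 steps
fast = mean_tracking_error(load_period=2)    # load flips almost every step
```

In this sketch the fast-flipping load leaves roughly ten times the average error of the slow one, which is the flavor of problem being described: the regulator never settles before the load changes again.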
 
So much ignorance in this thread, oh my goodness...

This is a hardware/driver problem with nVidia and their AIBs. Period.

New World can be coded like crap, much like any school/uni project can. It should not be able to override or bypass HARDWARE protections. How is that hard to understand?

In Jay's video, he only found a bug with Afterburner on EVGA cards. He should've been using Precision instead. And no one here has any idea how the game itself was coded, and it shouldn't matter. Power virus or not, it must not be able to kill a GPU so easily.
Yeah, I think there are a number of people who lack understanding of how modern computers work. Hardware and its drivers simply shouldn't allow software to exceed the capabilities it was designed for. If that happens, then it's a problem on the hardware manufacturer's end, not the software developer's. If a game manages to make a graphics card exceed its power limits, then that's the fault of the hardware manufacturer, either Nvidia or its board partners like EVGA. And even EVGA was claiming that the problem was down to inadequate soldering on their end, though that might just be the point of failure, while some other part of the hardware design may be at fault for allowing power draw to reach levels exceeding what the solder could handle.

If anything, people should be happy that this faulty hardware got exposed while it is still under warranty, rather than having the same faults triggered by other software a few years down the line. Games are only going to get more demanding over time, and if it's not New World exposing these problems, it's likely to be some other piece of software eventually. I imagine there is already other software out there that can trigger the same issue, but people are more likely to encounter it in this game simply because it's a new release with a large player-base, something that owners of a new high-end graphics card are very likely to try.

That's fine for users who have unlimited disposable income, but it is far from addressing the immediate issue. And most of us are on a budget. Odd response.
Anyone with limited income should not even be considering an RTX 3080 or 3090, especially with the way the pricing and availability of these cards has been since launch. Even if someone managed to snag one of these cards for the MSRPs Nvidia suggested they would be, that would still be at least $700 for a 3080, $1200 for a 3080 Ti or $1500 for a 3090. The 3090 in particular was an incredibly poor-value card at launch, being only slightly faster than the 3080 at over double the price. And cards of this level are not even a necessity to run any game on the market today. These cards are luxury items that people pay a premium for to run games at settings that give them minor improvements to visuals, not something anyone needs to play games.

It's probably Nvidia's fault; they likely had to fully open the taps to stay competitive with RDNA2. :D
That might not be an unreasonable assessment. The power draw of the RTX 3080, 3080 Ti and 3090 are beyond what has been standard for "enthusiast-level" graphics cards for years. Typically, cards in this class top out around 300 watts, but even the Founders Edition variants have 320-350 watt TDPs, with many real-world gaming scenarios exceeding those levels. Partner cards with heavier factory overclocks can even exceed 400 watts. The board partners may have simply not had enough experience testing the power delivery systems for cards of this power level.

And it wouldn't be surprising at all if Nvidia chose to up the clock rates for these parts to be higher than initially planned once they discovered that AMD was actually going to have competitive products at the high-end again. The 3080 could have easily been configured as a sub-300 watt part if they weren't concerned about cards like the 6800 XT outperforming it. Or Intel's upcoming cards for that matter.
 
Darn... I bought an EVGA RTX 3070 for my new PC. Here I was under the impression EVGA was one of the better GPU companies.
They are. They honor their warranties like few AIBs do.

As many say: "it's not what you do when you're winning, but when you're losing that defines how good you really are". In this context, they've (from what I've read everywhere) honored all warranties for the cards which have died with no problems to users.

Regards.
 
So this video talks about how, yes, some applications can violate the TDP spec of the video card, but through transient spikes. The boost algorithm/power limiter reacts based on average power, so it allows spikes through. He later explains that the GPU will just suck up whatever power it can from the VRM, and one of three things happens if it goes over the TDP spec:
  • Voltage drops too low and the GPU is unstable, causing a crash
  • Something gets too hot and catches fire
  • If there's any sort of protection mechanism, it kicks in and shuts down the video card
And he noted that in what he's seen, whatever protection there is is "generous in the wrong direction." Or it seems like the protection is "save the GPU, to hell with everything else."
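The average-power point can be sketched in a few lines: a limiter that only watches a moving-average reading never trips on short spikes well above the limit. All wattages and window sizes below are made up for illustration.

```python
# Toy illustration of average-based power limiting: short spikes far above
# the limit pass through because the averaged reading stays under it.
def peak_and_tripped(samples_w, window, limit_w):
    """Return (peak instantaneous draw, whether the moving average
    of the last `window` samples ever exceeded the limit)."""
    tripped = False
    for i in range(len(samples_w)):
        recent = samples_w[max(0, i - window + 1): i + 1]
        if sum(recent) / len(recent) > limit_w:
            tripped = True
    return max(samples_w), tripped

# 350 W limit: steady 300 W draw with two brief 500 W spike samples.
draw = [300] * 20 + [500, 500] + [300] * 20
peak, tripped = peak_and_tripped(draw, window=10, limit_w=350)
```

Here the card momentarily pulls 500 W against a 350 W limit and the averaged limiter never reacts, which is exactly the "generous in the wrong direction" behavior described above.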


My take on this is if you have a high-TDP card, tweak the V-F curve, set a power limit, do whatever you can to not let it go full throttle. As an aside, I did find that in one of the games I play frequently, Final Fantasy XIV, I can reduce the power limit to 80% on my RTX 2070 Super and experience almost no performance loss. So whatever my card is doing with the extra 20% power, it's pissing it into the wind.
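The perf-per-watt payoff of a cap like that is simple arithmetic. The numbers below are hypothetical (a 215 W reference board power and a ~2% fps loss at an 80% cap), chosen only to show the shape of the trade-off:

```python
# Hypothetical perf-per-watt arithmetic for an 80% power cap.
board_power_w = 215                 # assumed reference board power
fps_full, fps_capped = 100.0, 98.0  # assumed ~2% fps loss when capped

eff_full = fps_full / board_power_w              # fps per watt, uncapped
eff_capped = fps_capped / (0.80 * board_power_w)
gain = eff_capped / eff_full - 1                 # fractional efficiency gain
# gain == 0.225: ~22.5% more fps per watt for a ~2% fps cost
```

On NVIDIA cards the cap itself can usually be set with `nvidia-smi -pl <watts>` (admin rights required, and only within the range the vBIOS allows) or with Afterburner's power limit slider.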

And if you want me to point fingers, I'm pointing them at the AIBs primarily and NVIDIA second. AIBs since they still design the cards and NVIDIA for having this boosting algorithm. I'm not blaming Amazon despite the video proving some applications can violate the TDP spec because hardware should not allow software to kill it.
 