News EVGA Abandons the GPU Market, Reportedly Citing Conflicts With Nvidia


KyaraM

Admirable
~80% of their revenue comes from gpu sales. Sounds very impressive, until I heard the part about barely breaking even from it all, and profits from power supply sales are far higher... that's wild.
Some of those power supply sales have to be from bundles/combos, so if gpu sales halt, there will be fewer psu sales.

They're losing money from RTX 3080 and up, but making it from 3070Ti and down. Isn't that backwards behavior?



Comments across this site's older ARC news articles, TPU and EVGA forums [and reddit - bleh!]... have given the impression that it isn't enough to have more players.

Even if 1st gen ARC drivers weren't a crapfest... [Their fault for trying to do too much with it.]

Even if AMD's past driver stigma suddenly disappeared, driver updates were more frequent, RT and AI scaling development improved, etc...
[They are working with a fraction of the budgets that Nvidia and Intel have - what are some people bloody expecting them to do??? It's great what AMD is managing to achieve with Ryzen and RDNA, but ~no...]

Many folks won't budge at all, even with a break in the duopoly; Nvidia would still have a significant lead over the other 2 competitors. The green mindshare is too strong, and I can do naught but read/watch and shake my head...
I would try out Intel cards if I didn't have a great card already. Reason I'm not buying AMD cards again is that they all had severe software and heat issues. Personally, I really want a third option so I'm not locked to a single brand (not talking about AIBs here, actually, but Nvidia/AMD) anymore and actually got a choice. No, as stated above AMD is NOT a choice for me due to bad experiences.
 
  • Like
Reactions: cyrusfox

logainofhades

Titan
Moderator
They could have continued to make some money licensing their brand to some no-name manufacturer. That's essentially free money, a nice paycheck coming in year after year. Now such an option is essentially off the table. All because the CEO wanted his 15 minutes of fame on social media.

The CEO is in his 60's and doesn't want to deal with the stress of dealing with Nvidia, and wants to spend more time with family.
 
  • Like
Reactions: Metal Messiah.

Chung Leong

Reputable
Dec 6, 2019
493
193
4,860
What you are saying is like how AMD made some mountain bikes, that's just a marketing thing.

What I'm saying is HMD Global (I googled the name) makes Nokia phones. If AMD ever decides to exit the CPU market, I'm sure the AMD name would find itself on someone else's CPU. Atari missing out on the chance to put its name on the NES (and later, the Genesis) is regarded as one of the biggest debacles in tech history.
 
What I'm saying is HMD Global (I googled the name) makes Nokia phones. If AMD ever decides to exit the CPU market, I'm sure the AMD name would find itself on someone else's CPU. Atari missing out on the chance to put its name on the NES (and later, the Genesis) is regarded as one of the biggest debacles in tech history.
And while googling it, you didn't see that they were Nokia in the past, sold Nokia to MS, and then bought Nokia back?!
If EVGA sells the company to someone else, then yes, that someone else can do anything they want. But we are not talking about EVGA selling the company here.
https://en.wikipedia.org/wiki/HMD_Global
The company is made up of the mobile phone business that Nokia sold to Microsoft in 2014, then bought back in 2016.
 

InvalidError

Titan
Moderator
Do you really think George Foreman has anything to do with the manufacturing of grills bearing his name? And Trump Steaks, Trump Vodka? Individuals and companies lend their name out all the time.
The impeached former president didn't lend his name to vodka and steak; his companies ordered steaks, vodka, etc. to their specifications from manufacturers, then slapped his name on them before eventually filing for bankruptcy. Most PSU vendors don't make their own PSUs either. They send their requirements, specs, cosmetic designs, etc. to Seasonic, CWT, Delta, etc.; the vendor checks that its requirements were met and then sells the stuff.

Celebrities lending their name for marketing purposes doesn't carry the same weight, since there is rarely any reason to believe the person lending his name had much to do with the product beyond the apparent endorsement. Their name's worth and marketability can still be ruined by a bad partnership.
 
  • Like
Reactions: Metal Messiah.

King_V

Illustrious
Ambassador
Reason I'm not buying AMD cards again is that they all had severe software and heat issues.
...
No, as stated above AMD is NOT a choice for me due to bad experiences.

Yeah, might want to temper that first statement and stick with just the last one. The last one speaks to your experiences. The first one states that every AMD card made had these issues. That is false.
 

Eximo

Titan
Ambassador
A while back they stated they would start making AMD cards; maybe that didn't go over well with their current contract when it came to renewal.

Still, I find this to be devastating. Like watching Wayne Gretzky or Michael Jordan retire.

I feel like this is only temporary.

EVGA made a few AMD motherboards, but never GPUs as far as I can recall. That would be one way to transition: try to become the top producer of AMD motherboards for AM5.
 
  • Like
Reactions: spentshells

KyaraM

Admirable
Yeah, might want to temper that first statement and stick with just the last one. The last one speaks to your experiences. The first one states that every AMD card made had these issues. That is false.
Errr... no, it doesn't? It's called context, which can be inferred by reading my statement. Though some of the cards I had were from the time when AMD actually did have huge software issues across a broad range of their cards, so even in that different context the claim wouldn't be completely false... and their cards run too warm for my tastes even today. But I digress.
 

Phaaze88

Titan
Ambassador
I would try out Intel cards if I didn't have a great card already. Reason I'm not buying AMD cards again is that they all had severe software and heat issues. Personally, I really want a third option so I'm not locked to a single brand (not talking about AIBs here, actually, but Nvidia/AMD) anymore and actually got a choice. No, as stated above AMD is NOT a choice for me due to bad experiences.
That's your experience, as unfortunate as it was(sorry to hear that), it doesn't make or mean the entire brand is bad for everyone else, presently or later on.
What about those currently using Radeon without a hitch? Some have bad experiences with Nvidia too, though underrepresented.
There can be heat issues with models from both sides, so that one is a bit lost on me.

Will you still lock yourself to one option(Nvidia) if Intel has those same issues with future generations?
 

JamesJones44

Reputable
Jan 22, 2021
647
581
5,760
I believe they have a board of trustees, right?

They are ultimately responsible for corporate governance, but you're right, the final decision I believe remains with him.

That depends; not every private company has a board. It depends on the structure. In most cases, when a private company is still run by the founder(s), the board can't unseat them the way it can when the company is public. Who knows, though; maybe he's sold his shares to private-equity investors and they will throw him out. It's hard to know with private companies.
 
Last edited:
Sep 10, 2022
113
17
95
Yeah, might want to temper that first statement and stick with just the last one. The last one speaks to your experiences. The first one states that every AMD card made had these issues. That is false.

I own a 6800 XT. The GPU hotspot never runs above 85°C when pushing hard on FS at Ultra QHD settings. Now that I undervolt it, it actually boosts more and only averages 80-82°C hotspot, while the GPU core temp itself stays within the 60-70°C range. In PUBG at 1440p QHD ultra settings it runs about the same temperatures as well. Playing CSGO, my new favorite game (a few days into it as a noob), my GPU runs at only 48-52°C, with the hotspot occasionally reaching 58-60°C.

In terms of drivers, AMD is more solid with what is listed as WHQL (not optional) on their website. The optional releases tend to have issues; not every user will be affected, but I was, and I learned that quickly. After weeks on the WHQL driver from their support site, I never encountered drama, not once. It's just that when we agree to be beta testers by always updating to the latest drivers that aren't even WHQL certified, we should beware that we may encounter issues. AMD always states this in their release notes, always! An issue here, an issue there; just be aware of that. Want to run issue-free? Simple: use what's known as the last stable driver. It's just like BIOS; people willing to update and install a beta version, and then complain about issues, are just not smart. LOL
 
Last edited:
Sep 10, 2022
113
17
95
That's your experience, as unfortunate as it was(sorry to hear that), it doesn't make or mean the entire brand is bad for everyone else, presently or later on.
What about those currently using Radeon without a hitch? Some have bad experiences with Nvidia too, though underrepresented.
There can be heat issues with models from both sides, so that one is a bit lost on me.

Will you still lock yourself to one option(Nvidia) if Intel has those same issues with future generations?

If I'm being unbiased, Nvidia cards tend to have a lot higher temps due to the nature of their boost; they just boost, boost, boost. I see Nvidia owners complain about temperatures every day, especially on OCN. They don't OC their cards, but the cards just keep boosting and getting HOT. LOL
 
If I'm being unbiased, Nvidia cards tend to have a lot higher temps due to the nature of their boost; they just boost, boost, boost. I see Nvidia owners complain about temperatures every day, especially on OCN. They don't OC their cards, but the cards just keep boosting and getting HOT. LOL
The biggest problem for Nvidia was GDDR6X temps, not GPU temps. Partner cards were way better than Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps 10-20C. I rarely see GPU temp on 3080/3090 go above 80C, and often closer to 70C.

Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.

In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.
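For anyone who wants to keep an eye on their own card while gaming or mining, here's a minimal sketch using Nvidia's NVML bindings (the pynvml module, assuming it's installed via pip as nvidia-ml-py). As far as I know, the GDDR6X junction temperature that actually hits 110C isn't exposed through standard NVML on GeForce cards, so a tool like HWiNFO is still the way to watch that reading; this only logs core temp and board power.

```python
# Minimal sketch: log GPU core temperature and board power once per second.
# Assumes an Nvidia driver and the pynvml bindings (pip install nvidia-ml-py).
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(10):  # sample for ~10 seconds; adjust as needed
        temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # NVML reports milliwatts
        print(f"GPU core: {temp_c} C, board power: {power_w:.0f} W")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```

Run it in the background while gaming and you can see exactly where your card settles before and after a pad swap.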
 
The biggest problem for Nvidia was GDDR6X temps, not GPU temps. Partner cards were way better than Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps 10-20C. I rarely see GPU temp on 3080/3090 go above 80C, and often closer to 70C.

Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.

In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.
I have a 3080 FTW3 card and the temps regularly hit 80C while playing games in my small room. The ambient temp sensor (placed in the front intake of my case) in my Cooler Master H500M regularly reads 29-31C. My PC is in an 11x12-foot room and the temp outside is usually 70-80F; I live on the coast of SoCal. With all that being said, I have about 300-400W of tech running at idle (3 monitors, PS4, 5.1 speakers). I have found that on my 3080, if I place little heatsinks (these are the ones I used) on top of the backplate over the approximate areas of the memory chips, the overall temps of the card are reduced massively: by about 7C when running games and about 12C from idle. I would assume that on 3090-layout cards, with memory chips placed directly on the back of the card and in contact with the backplate, this would be even more effective.
 
Last edited:

King_V

Illustrious
Ambassador
Out of curiosity, what are the dimensions of an individual one of those heatsinks? They seem overall cube-shaped, yet the listed dimensions LxWxH are listed as: 3.94 x 1.97 x 1.18 inches, which seems WAY off.
 
Out of curiosity, what are the dimensions of an individual one of those heatsinks? They seem overall cube-shaped, yet the listed dimensions LxWxH are listed as: 3.94 x 1.97 x 1.18 inches, which seems WAY off.
The ones I linked are not the exact ones I personally had lying around and used, but they seem to be 14mm x 14mm square. I could not figure out what the ones I used were, because there are no identifying markings on them and I no longer have the packaging. The ones I specifically used are about 1 inch by 1 inch, with aluminum fins around 3/4 of an inch tall.
*I found the ones I used, or they are so close to what I have that they might as well be the same, and updated the post I linked them in.
 
Last edited:
  • Like
Reactions: King_V

Phaaze88

Titan
Ambassador
If I'm being unbiased, Nvidia cards tend to have a lot higher temps due to the nature of their boost; they just boost, boost, boost. I see Nvidia owners complain about temperatures every day, especially on OCN. They don't OC their cards, but the cards just keep boosting and getting HOT. LOL
When there's factors such as differences in process nodes and total board power, how is that a fair comparison?

The smaller the node > greater thermal density > more difficult to cool(may warrant more specialized cooling solutions).

Among the current halo(6950XT) Radeons, many have their total board power capped between 300-330w. Nvidia has matched and passed that with the 3070Ti...
Memory doesn't use much power at all - in the single digits(w), but the higher board power SKUs see them bathe in more heat. GDDR6X having higher operating thermals than R6 doesn't help.
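To put very rough numbers on the thermal-density point: dividing power by die area gives watts per square millimetre. The sketch below uses approximate public die sizes (GA102 around 628 mm², Navi 21 around 520 mm²) and stock board-power figures (320 W for the 3080, 335 W for the 6950 XT); these are illustrative assumptions only, and board power overstates what the die itself dissipates since it includes memory and VRM losses.

```python
# Rough illustration of "more power in less area = harder to cool".
# Die areas and board powers are approximate public figures; board power includes
# memory and VRM losses, so this only approximates the heat flux through the die.
cards = {
    "RTX 3080 (GA102, Samsung 8N)":  {"board_power_w": 320, "die_area_mm2": 628},
    "RX 6950 XT (Navi 21, TSMC N7)": {"board_power_w": 335, "die_area_mm2": 520},
}

for name, c in cards.items():
    density = c["board_power_w"] / c["die_area_mm2"]  # W per mm^2
    print(f"{name}: ~{density:.2f} W/mm^2")

# Both land in the same ~0.5-0.65 W/mm^2 ballpark; the denser the die,
# the harder each watt is to pull out, hence the beefier coolers every generation.
```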
 

SunMaster

Prominent
Apr 19, 2022
158
135
760
When there's factors such as differences in process nodes and total board power, how is that a fair comparison?

The smaller the node > greater thermal density > more difficult to cool(may warrant more specialized cooling solutions).

Among the current halo(6950XT) Radeons, many have their total board power capped between 300-330w. Nvidia has matched and passed that with the 3070Ti...
Memory doesn't use much power at all - in the single digits(w), but the higher board power SKUs see them bathe in more heat. GDDR6X having higher operating thermals than R6 doesn't help.


There's a reason the GDDR6X chips get hot.

https://www.youtube.com/watch?v=HqbYW188UrE
This video explains GDDR6X and its power draw. It's absolutely not in the single-digit watts.
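As a back-of-the-envelope check, Micron's GDDR6X literature quotes roughly 7.25 pJ per bit transferred. The sketch below assumes a reference RTX 3080 layout (ten 32-bit chips at 19 Gbps per pin); the numbers are illustrative estimates, not measurements, but they show the memory as a whole lands well above single-digit watts even if each individual chip is only a few watts.

```python
# Back-of-envelope GDDR6X power estimate (illustrative assumptions, not measurements).
# Micron quotes roughly 7.25 pJ per bit for GDDR6X; a reference RTX 3080 runs ten
# 32-bit chips at 19 Gbps per pin.
PJ_PER_BIT = 7.25e-12      # joules per bit (Micron's published GDDR6X figure)
GBPS_PER_PIN = 19e9        # bits per second per pin
PINS_PER_CHIP = 32
CHIPS = 10                 # RTX 3080 reference board

watts_per_chip = GBPS_PER_PIN * PINS_PER_CHIP * PJ_PER_BIT   # ~4.4 W at full bandwidth
total_watts = watts_per_chip * CHIPS                          # ~44 W for the DRAM alone
print(f"per chip: ~{watts_per_chip:.1f} W, all chips: ~{total_watts:.0f} W")
# This excludes the memory controller/PHY on the GPU die, so the real memory
# subsystem figure is higher still.
```

The per-chip figure is roughly what the "single digits" comment was getting at, but across ten or twelve chips plus the controller it becomes a meaningful slice of total board power.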
 
Sep 10, 2022
113
17
95
When there's factors such as differences in process nodes and total board power, how is that a fair comparison?

The smaller the node > greater thermal density > more difficult to cool(may warrant more specialized cooling solutions).

Among the current halo(6950XT) Radeons, many have their total board power capped between 300-330w. Nvidia has matched and passed that with the 3070Ti...
Memory doesn't use much power at all - in the single digits(w), but the higher board power SKUs see them bathe in more heat. GDDR6X having higher operating thermals than R6 doesn't help.

Well, I am not trying to compare the two; I was just stating a fact, since someone mentioned that AMD is full of thermal and heat issues. And you are simply confirming the reasons why it's generally hotter.

As I said, I know how people complain about AMD drivers, and I also know how people keep complaining about Nvidia temps. Whatever they complain about, it isn't really my business to find a solution. It's just that the complaints are always there, and I'm confused, really; nothing comes in perfect. If we want anything really cool and really fast, we should all just get those top-range cards, water cooled.
 
Last edited:
Sep 10, 2022
113
17
95
The biggest problem for Nvidia was GDDR6X temps, not GPU temps. Partner cards were way better than Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps 10-20C. I rarely see GPU temp on 3080/3090 go above 80C, and often closer to 70C.

Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.

In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.

Man, you explained it well, but most people don't get this explanation when asking for help 😂 Most of them just get advised to undervolt, without anyone telling them it is what it is
 
  • Like
Reactions: JarredWaltonGPU

KyaraM

Admirable
That's your experience, as unfortunate as it was(sorry to hear that), it doesn't make or mean the entire brand is bad for everyone else, presently or later on.
What about those currently using Radeon without a hitch? Some have bad experiences with Nvidia too, though underrepresented.
There can be heat issues with models from both sides, so that one is a bit lost on me.

Will you still lock yourself to one option(Nvidia) if Intel has those same issues with future generations?
I didn't say all of their cards are the same and that they don't work for anyone. They would have been out of business long ago if that were the case. Still, I feel that I am free to avoid products I have bad experiences with. What you do is your thing. To answer your question, I would do the same with Intel as I did with AMD; three strikes and out.

The biggest problem for Nvidia was GDDR6X temps, not GPU temps. Partner cards were way better than Founders Edition for the most part (though some AIB cards sucked even worse, true). I replaced thermal pads on a Gigabyte Vision 3080 and the 3080 FE, and both dropped VRAM temps 10-20C. I rarely see GPU temp on 3080/3090 go above 80C, and often closer to 70C.

Now, if you fire up Ethereum mining (back when that was a thing), Nvidia GDDR6X temps shot up to 110C on many cards, definitely on the Founders Editions. I even tried replacing thermal pads on the 3080 Ti FE VRAM, and while it helped a bit, clearly that cooler design wasn't keeping up with 12 GDDR6X chips pumping out the heat.

In fact, GPU temps did go up on all the cards where I replaced thermal pads on the VRAM. That makes sense, as suddenly the VRAM was able to dump more heat into the cooler and that affected the GPU. But the GPU temps were still in the safe range.
That is quite interesting. My 3070 Ti, also with GDDR6X VRAM, runs at around 60-65°C core and 70-75°C hotspot, with VRAM temps basically right in the middle of them. No pad exchange required. The highest I have seen was 75°C on average on a very hot summer day, I think. Still pretty safe. I'm not sure what the issue with the bigger cards is exactly, whether it lies with early models or a general design flaw. I don't have an FE card, though, so that might point at the FE being not so well constructed, or at least the pads being bad?

Meanwhile, all AMD cards I had, and all tests I have seen, ran higher. Worst was the card I had that ran to 120°C and then shut down the PC - in GW2, which really isn't all that graphically demanding and never was.
 

bretbernhoft

Prominent
Jan 18, 2022
9
0
510
wherebret.com
My first "real" GPU was an EVGA product; the 1050. And I've been buying their Graphics Cards ever since. So it's a real shame that this company has ended their famously durable GPUs, for whatever the reason is. Oh well, the marketplace will crown a new champion manufacturer. Who that will be, only time can tell.
 