Question: Realistically, what is the PERFORMANCE difference between FEs and AIBs when it comes to NVIDIA cards?

(This video is pretty much the only resource I could find that looks at thermals, the VRM and so on when differentiating the models, but there are no real-world game tests to show the practical difference, so it's somewhat limited.)

Hey guys, with the upcoming 4000 series launching, I have a question, and I've searched far and wide and can't seem to find the answer. As we know, the 3080 Ti STRIX is the priciest card out there and its VRM design is (quoting) "the best on the market", but its price is something like ~20-30% higher than other vendors' models. Is that justified? For context: from what I've seen, you can only push the core clock ~100-150 MHz higher at best, which translates to maybe 10 or so FPS from what I gather... and that's my problem.

There's no data comparing AIB cards' performance under the same circumstances. Does anyone have a document/video I can look over? In essence, I'm wondering whether it's worth paying that "STRIX" premium. I'm looking to buy a 4090 the second it launches; I'll be gaming at 1440p, and I know it'll be enough to give me 120-140 FPS maxed out in every game. But I was thinking - hey, if I pay that "extra premium", I can squeeze A LOT MORE out of the card. Well, that's the assumption at least.

I don't mean to sound rude, and I don't mind paying the extra, but from what I gather there's not much difference, so why pay for it?
 

If all you care about is stock FPS/$, there isn't much reason not to get the cheapest 3080 Ti available.
But if you value other things like cosmetics, build quality, noise, warranty, overclocking, binning, parts quality, etc., then having all the additional options makes a whole lot more sense.

I too wish more emphasis was placed on testing competing designs for the same GPU, but it seems not enough people share that interest. I like to buy the GPUs with overbuilt cooling solutions myself, but I can only really benefit from that because I love to overclock.
 
Well, the idea here is that I'm wondering if it's worth spending the extra money on these cards - say, between the 3080 Ti FTW3 and the 3080 Ti Strix, what difference is there? As you said, I guess there's just not enough interest to test these things, but... why? How do you then know what to spend money on, given that FPS is the main driver behind the choice (besides money, which is a constraint)?
 

Between those two specific cards there are visible differences in part selection and design - notice the difference in the area directly behind the die. The fin density looks higher on the Strix, but it's not like they advertise total surface area. Then again, the FTW3 is heavier than the Strix. Would any of those changes equate to an equal increase in cost? Likely not, but I'm sure the difference is at least measurable.

I think there isn't much demand for testing these differences because if someone is worried about the value of their card, they're just going to buy the cheaper one anyway. I'd be curious to see sales volume by card variant as well; I'd wager the cheaper variants sell far more than something like the Strix.

[Product photos of the two cards' coolers]
 

Phaaze88

You won't notice it, whether you're running around in an open field, zooming down racetracks, duking it out with/against other players, etc.
It's those blasted benchmarks that make a mess of it all by exaggerating the differences.

the 3080 Ti STRIX's VRM design is (quoting) "the best on the market", but its price is something like ~20-30% higher than other vendors' models. Is that justified? For context: from what I've seen, you can only push the core clock ~100-150 MHz higher at best, which translates to maybe 10 or so FPS from what I gather...
1) Yeah, I remember Buildzoid said that too. It's probably true, but there are only so many models people can review... [also money...]
2) Is the price justified? IMO, hell no. The ROG line has been overpriced for the longest time. Asus also doesn't have an excellent track record for customer service - then again, when they get too big, a bunch of companies 'trim the fat' at CS when they really shouldn't.

3) I have doubts whether that much of a core clock bump is widely stable.
The built-in GPU Boost algorithm does the majority of the boosting for the user - by several hundred MHz - and it is free to change the core clock at any time depending on the GPU's parameters. This dynamic behavior can prevent the crash symptoms normally seen with fixed-frequency overclocks, except when one goes flat-out too high on the core clock increase and it crashes right away.
Depending on the product tier, there's an additional core clock bump applied; you can see it by comparing the FE's base clock to the base clock of other models.
Then there's the wrench that is the power limit: it will quickly put a stop to how high you can manually push the core clock. The higher you push it > more power drawn > more frequently the power limit is reached > more frequently GPU Boost dips into lower boost bins.
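
If you want to watch that happen on your own card, here's a rough monitoring sketch (assuming the pynvml bindings from the nvidia-ml-py package are installed and an NVIDIA driver is present) that polls the SM clock, board power and enforced power limit once a second while you run a game or benchmark:

import time
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Enforced board power limit in watts - the ceiling GPU Boost has to stay under.
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000

try:
    while True:
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000
        print(f"SM clock: {clock_mhz} MHz | power: {power_w:.0f} / {limit_w:.0f} W")
        time.sleep(1)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()

When the power reading sits pinned at the limit, the clock dropping at the same time is GPU Boost shedding boost bins exactly as described above.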


I was thinking - hey, if I pay that "extra premium", I can squeeze A LOT MORE out of the card. Well, that's the assumption at least.
Yeah, pre-GTX 10-series cards, that was a thing. Since then, Nvidia has locked their GPUs down - hard. You're not going to "squeeze A LOT MORE out of the card" unless you go EVGA Kingpin or Galax Hall of Fame.

I'm wondering if it's worth spending the extra money...
This is a personal question to which there is no solid answer. I provided my opinion in the part about price justification.

I guess there's just not enough interest to test these things, but...why?
How many reviewers can afford to test 20+ different 3080 Tis, etc.?
There are only so many review samples going out, and those usually have to be returned within a window.
Some companies aren't fans of unbiased reviews, and will simply refuse to send review samples.

knowing that FPS is the main driver behind making a choice
The CPU IS fps.
GPU fps is equal to or less than the CPU's fps, depending on eye-candy settings; it can never be more, because the GPU sits behind the CPU in the pipeline that delivers images to your screen.
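
In other words, the relation is roughly a min() - a tiny sketch with made-up numbers, just to illustrate:

# Displayed FPS is capped by whichever stage is slower (illustrative figures only).
cpu_fps = 160   # frames per second the CPU can prepare in a given game
gpu_fps = 120   # frames per second the GPU can render at your chosen settings
print(min(cpu_fps, gpu_fps))  # -> 120; a slightly faster GPU variant only helps while gpu_fps < cpu_fps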
 

I understand this. I saw a video on the VRM differences; I just find it super weird that there are never comparisons between these. The price difference between a Strix and a lower-end card is staggering, yet you don't even know what you're paying for.

Sorry, I know I'm repeating myself, but you said that these are "quantifiable", yet nobody's shown a graph. In essence, why shouldn't I get the cheapest version if performance is still the same as a Strix, given I'm not looking to do massive overclocking? As a layman, I don't know which card to pick. Assume I had infinite money but was still practical, and not looking to spend where there's no quantifiable difference - I still couldn't know. There's no data. Nobody does these comparisons, yet I find them vital.
 

Thanks for the reply. Very good points raised, but still, there's no data, no measurements showing what the difference between these cards is - even two models would suffice. You say "IMO, it's not worth the money", but what if the difference is really ~15-20%? So, instead of 100 FPS, you'd be hitting 115-120? That's definitely a big jump, and for many people it would make the higher-end models worth considering.

That's what I'm saying - if you check my other reply, there's no data on this. I have plenty of cash, but I'm not looking to give ASUS a free $500 "just because". I wanna make an informed decision on what to buy, and I can't. There's no data, nothing to look at to help me.
 

I don't think I said anything was quantifiable. I believe these differences would be measurable, but only if anybody was actually measuring them... which they're not.
I'm agreeing that we don't know the actual performance difference.
It's fair to say that a Strix will perform better than the cheapest 3080 Ti, because the out-of-the-box boost clock is in fact higher and the cooler is ridiculous.
The question, though, is how much better and whether it's worth it.
 

Yep. Sorry to repeat myself. I'm currently going through a lot of videos trying to find matching setups and compare performance in different games across different cards, but it's so, so much work.
 
Yeah, the differences are not anywhere close to that large. Even the highest-end variants of a card are not likely to push more than 5% higher frame rates than a stock-clocked model, and that goes for just about any card released in recent years. Usually, there's not much more than a few percent difference between them, which will tend to be imperceptible in practice. Maybe at the extreme high-end, some would consider it worth paying significantly more for slightly higher performance, but usually one would be better off going with a "lower-end" version of a higher-tier card instead, as far as value is concerned.

Another thing to consider is that the power draw, and in turn the heat output, of these cards often skyrockets just to get those last few percent of performance out of the silicon. When you're pushing a graphics chip to its limits, you're taking it past its ideal efficiency range, and the amount of waste heat increases disproportionately to the amount of additional performance those slightly higher clocks will get you. So the cards often need those bigger coolers just to keep temperatures in check.
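
To put rough numbers on that (purely hypothetical figures, not measurements of any particular card):

# Illustrative example of how perf-per-watt drops near the top of the curve.
base_power_w, base_fps = 350, 100    # a card held at its stock power limit
oc_power_w, oc_fps = 430, 104        # same chip pushed a few percent faster

extra_perf = oc_fps / base_fps - 1            # ~4% more frames
extra_power = oc_power_w / base_power_w - 1   # ~23% more power, all of it ending up as heat
print(f"+{extra_perf:.0%} performance for +{extra_power:.0%} power draw")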

Personally, I don't think it's worth paying more than 5-10% over the lowest-priced versions of a card for something like a better cooler. And the recent Founders Edition coolers tend to be pretty well built themselves. Many of the partner cards have coolers that manage to stay quieter under load, but again, I wouldn't expect noticeably better performance out of them.
 
I'm sure there's data on it. The problem is that the data is all over the place. On top of that, there are too many variables to consider: no two websites use the same test suite, the same hardware configuration, or the same software configuration, and heck, the environmental conditions depend on the day.

Another thing to note is that video cards since GeForce 10 (and probably Radeon RX 400/500) basically boost to their limits on their own. On the default profile, I've seen my RTX 2070 Super go up to 1950 MHz. The best I could get it to was 2025 MHz (2050 MHz seems to be the limit other people hit without exotic cooling). So if we start from where my card goes by default, pushing it harder only gets me about 4%. Sure, we could point to what the box says is its boost clock (1800 MHz), but considering every NVIDIA card since GeForce 10 has this boosting behavior, including the FE cards, I don't think that works as a starting point.

Also, in my experience I've encountered scenarios where higher clock speeds don't actually improve performance, yet the card will happily continue to burn more power. And a bump in clock speed almost never means an equal bump in performance; pushing the clocks 10% higher didn't get me 10% more FPS.
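
If you want to sanity-check your own overclock's scaling, the arithmetic is simple (hypothetical before/after numbers shown):

# Compare % clock gained vs % FPS gained from two runs of the same scene and settings.
stock_clock_mhz, oc_clock_mhz = 1950, 2025   # example figures
stock_fps, oc_fps = 120, 122                 # example figures

clock_gain = oc_clock_mhz / stock_clock_mhz - 1
fps_gain = oc_fps / stock_fps - 1
print(f"clock +{clock_gain:.1%}, fps +{fps_gain:.1%}, scaling efficiency {fps_gain / clock_gain:.0%}")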

I also don't believe in buying a card for its "potential." I buy a card for what it can do for me as-is.
 

KyaraM

My Gainward Phoenix outperforms 90% of 3070 Tis out there if I really want it to, and that's a rather cheap card with a hard power cap at 290 W - essentially an FE with a custom cooler, especially compared to the Strix. It still tops 60-70% of them without even trying. Go figure.

Personal opinion: go with a cheaper one and tune the heck out of it. The difference between good and bad cards is single-digit FPS most of the time. For example, unless the cooling is horrible, pretty much every 3070 Ti should be able to hold 1900 MHz most of the time even if it only guarantees 1770 MHz or so. From there, I can get my own up to almost 2100; my comfy OC is 2025 MHz with peaks to 2050. They all carry the same chip; the only real difference is binning, so results boil down to the silicon lottery. And again, that's a cheapish model I use. Here's the kicker, though: I limited it to 200 W, and it still has almost the same performance as before, minus maybe 2-5 FPS on average depending on the game...

I used the 3070 Ti here as an example because that's what I run, but the same applies to the 3080 Ti or any other GPU, really.
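
For anyone wanting to try a power cap like that themselves, it can be set programmatically as well as through Afterburner. A minimal sketch, assuming the pynvml bindings (nvidia-ml-py) and admin/root rights; it uses the same driver mechanism as "nvidia-smi -pl 200":

import pynvml  # pip install nvidia-ml-py; changing the limit requires elevated rights

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# The card only accepts values inside its vBIOS min/max power-limit range.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
target_mw = max(min_mw, min(200_000, max_mw))  # aim for 200 W, clamped to what the card allows

pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
print(f"Power limit set to {target_mw / 1000:.0f} W")
pynvml.nvmlShutdown()

The limit typically resets when the driver reloads or the machine reboots, which is why tools like Afterburner reapply their settings at startup.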
 

mightmaster92

Just my personal thoughts, not scientifically proven or anything:

From what I understand, the performance difference between the so-called 'worst' and 'best' cards of the same model is something like 5%. The biggest differences come down to the cooling solution and component/build quality.

My own conclusion is that rather than getting the 'best' card of one model, you can get an 'average' card of the next model up ...

Nvidia's FE cards are kinda 'average' - definitely no Strix.
 
Remember, the OP is looking to buy a 4090, not a 3080 Ti.
My opinion is that you get mostly what you pay for until you get to the very top overclocked versions.
And, the 4090 cards are going to be expensive and in short supply.
Early adopters will bid the price up regardless of the MSRP, and scalpers are going to scoop them up so normal mortals will have a hard time buying one at all.
 
Thank you all for your input. I've now spent a lot of time going through the videos and links you posted, and the "objective" answer (I can't really be arsed to post my whole messy .txt file) is:

If you're looking to overclock to <Mod Edit> and beyond, go with the Strix. Its VRM, cooling and so on are top-notch. However, looking at 3080 Tis at ~2100 MHz and up, the performance gain is at best ~10%, and some videos show even less.

Otherwise, just buy from a reputable AIB. EVGA seems to have the best customer support around and no issues honoring its commitments worldwide. You'll lose maybe ~3-4% in FPS, since you can't overclock as far.

Note that this is based on past generations; the 4000 series might be different, but the point likely still stands.

Oh, and you're getting that extra 3-4% while easily paying +100-150 W for it, so there's that as well.
 