Nvidia GeForce RTX 5070 Ti review: A proper high-end GPU

This has been a fun read.

Plenty of things I'd like to comment on, but I'll just keep it brief:
1- YouTubers' incentives -> they're not that much different from any publication with a hard editorial line. Some "old school" reviewers and journalists are lucky enough to be free, compared to others working under different publishers and editorial overseers. The "angry reviewer" thumbnail is the exact same game as a sensationalist title on an article, so glass house/ceiling and all that (not this particular case, mind you; I liked the "entry door sign"). What matters then is the content: don't dismiss the information out of prejudice any more than you'd dismiss an article because of its title. To each their own, but this part of the conversation I read was a bit... flawed.

2- Prices -> This is not a single party's fault, nor a new problem, as many have alluded to. If you're shocked by this, then you're new around here, aren't cha? Thing is, and this is where I can sympathize and align in a reactionary manner: it sucks and there's no sign it'll stop sucking. It's by now a well-known trend, one nVidia and AMD are more than happy to continue. Just because there's nothing you can do about it doesn't mean you have to feel good about it. If you're "fine with it", well, sucks to be you, I'd say.

3- Performance -> The whole RTX5K launch can be summarised in this single statement: "you could have gotten the same level of performance about 2 years ago for cheaper". That's it. That is the headline title. Do the new things nVidia put into Blackwell and developed under DLSS4 change things enough to make them "proper" value adds? As mentioned before, to each their own. Fortunately, for me, they don't move the needle. The things I personally care about, nVidia doesn't, so we're not aligned. Small violin, for sure 😀

Regards.
 
As a follow up to this:

Newegg had MSRP listings, but they never went in stock. Only the $900+ listings showed as in stock during this launch.

*Edit: It really felt like "These listings are for compliance, and to get people off our backs. The ones for sale are the actual cards though."
 
The 5080 is a $1,000 graphics card, and I would have preferred it to get 24GB (8 x 3GB chips) rather than 16GB.
Exactly what I had in mind. When I first read the 5080 specs I was shocked by the abnormal VRAM gap between it and the 5090. The second most powerful card in the lineup should not have half the flagship's VRAM.
You are absolutely right, and the 5080 should have 24GB. Probably we will be graced with a 5080 Ti Super in the future...
 
It's so ridiculous.
Jarred lives in a fantasy world where MSRP makes sense when it's just a totally <Mod Edit> marketing thing made to get good reviews.
 
What Jarred spoke of is called ragebaiting, and it's very true, because it's everywhere--not just pertaining to YT, but on social media, even on these forums--and yes, in some THW pieces. Ragebaiting is effective in getting user participation, which translates to clicks and eyeballs, which in turn translates to ad revenue.
And that's the very reason I look at GN and often end up shaking my head at the very obvious raging. I don't get that from any of Tom's reviews. None!
 
The truth of the matter is nvidia won't say what is really going on.
It is business, the majority of the silicon is going to AI clusters and databases for the big money and they don't really care about gaming.
Also: Look at the cult members lining up regardless of how many boots to the tailbone nvidia hands them.
Why would they have to change what they are doing?
 
Jarred was speaking of his own work, not of THW as a whole.

What Jarred spoke of is called ragebaiting, and it's very true, because it's everywhere--not just pertaining to YT, but on social media, even on these forums--and yes, in some THW pieces. Ragebaiting is effective in getting user participation, which translates to clicks and eyeballs, which in turn translates to ad revenue.

But there are downsides to getting people frothing at the mouth, some of which we've seen here in this very thread.

Hence the whole preamble you ignored.

There's always a personal choice when information is presented to you. Some do this consciously, others unconsciously: filter out what you consider noise.

Remember: whether written or in video form, data is presented to you. It's up to you to decide what is "fluff" and what is relevant.

I stand by the "glass house/ceiling".

I'd say it's the opposite: feeling "bad" about rising prices for what are basically tech toys would suck. Life has enough stress and aggravation without worrying about trivial things like a GPU so you can play games better. Save your worries for things that matter, like keeping a job in these trying times, or stretching the budget to put food on the table.

Forums like these are thought silos, where unimportant things get amplified and relentlessly hashed over. It's a tiny percentage of gamers who'd camp out on launch day and obsess over every tidbit of their tech toys, only to whine about how the new toys are overpriced, or not fast enough. We are not typical gamers. Our concerns are not typical, outside of these confines.

I don't completely disagree. My only observation about what you're saying concerns your/our own inclinations and personal ways of expressing things. Some people need to vent and need to rage; others can do it in a less vitriolic way. I don't disagree it can get out of hand into "insufferable" territory rather quickly, but at the same time, don't go to the opposite end and keep all the frustration inside. Find your valves and use them wisely.

We definitely agree the situation sucks (or so I understood), so the crux is whether you should vent the frustration or "eat it silently", heh. You just try not to let it bother you under the context/reasoning of "it's just games". Well, I game a lot, so it affects my life significantly, since my free time is spent in VR or regular gaming. Whatever hobby you engage in, if conditions change for the worse and the trend says everything will get even worse down the line, well, your hobby won't be as relaxing, fun or engaging anymore, will it? Again: I don't completely disagree with what you said, but I'm more of the whiney, vocal type. Babies don't get attention otherwise!

Also, you should learn how to quote in-line with the post (use the BB code, in short). Please do take this in a positive way, as it's easier to spot when you reply to someone.

Regards.
 
As a follow up to this:

Newegg had MSRP listings, but they never went in stock. Only the $900+ listings showed as in stock during this launch.

*Edit: It really felt like "These listings are for compliance, and to get people off our backs. The ones for sale are the actual cards though."
I don't think anyone really thought the cards wouldn't sell out quickly / immediately. There's just too much of a backlog of people willing to pay more. What we really don't know is how many cards were actually sold from all the major places. Someone at Nvidia said there would be far more 5070 Ti than there were 5080. I don't doubt that. But how many 5080 were there?

If we take the whole GPU market, something like ~30 million discrete graphics cards get shipped each year (not even counting the 2020–2022 spike). So when the 40-series supply ran dry back in December (and that applies to everything from the 4070 up through the 4090), it created a massive amount of unsatisfied demand. Even if the $500+ market only accounts for 10% of all GPUs sold (just a number I picked), two months' worth of sales would work out to around 500,000 graphics cards.

So potentially worldwide, conservatively 500,000 people and businesses that would have otherwise bought a new graphics card probably weren't able to do so in the past two months. But Nvidia sells around 2~2.5 million GPUs per month (for consumers), across all lines.

Looking into it a bit more, given the breakdowns on Steam, probably about 1/3 of Nvidia GPUs sold per month in 2024 were RTX 4070 and above. Which means that 500K estimate I just made would be woefully short of reality! Take 2.25 million new Nvidia GPUs sold per month: Steam suggests 11.5% of all GPUs listed are 4070 and above, and 26% are RTX 40-series. If that's even remotely accurate, we could be talking about as much as 44% of the ~2.25 million per month figure. That would be one million upper-midrange to extreme GPUs per month, normally!

And that gives a sense of scale for the problem with running short too early. Even if Nvidia made 250,000 RTX 5080 GPUs for the launch, in 2024 it seems like it might have been selling half that amount every month for the RTX 4080 / 4080 Super. More people will want to upgrade to a new GPU when it comes out, and so the inventory is immediately gone.

Frankly, I could be off on some of these numbers by an order of magnitude, because when JPR as an example says Nvidia sells ~7 million GPUs per quarter, it doesn't give a breakdown on pricing or categories. Maybe 90% of those are old GPUs for laptops. I also don't have inside access where I could actually say how many wafers Nvidia uses per month from TSMC, and how many of those are for GeForce. But it's a lot (200K?), and I'm sure that the ~2,300 5080 GPUs sold across all of Micro Center are only a fraction of the global supply that was available at launch.
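For anyone who wants to sanity-check that chain of estimates, here's a quick back-of-the-envelope sketch in Python. Every input is a rough guess pulled from the post above, not real sales data:

```python
# Back-of-the-envelope version of the estimates above.
# All inputs are rough guesses from this post, not real sales data.

nvidia_consumer_gpus_per_month = 2_250_000  # "~2-2.5 million GPUs per month"

# Steam hardware survey breakdown cited above:
share_4070_plus   = 0.115  # 11.5% of listed GPUs are RTX 4070 and above
share_rtx40_total = 0.26   # 26% of listed GPUs are RTX 40-series

# Fraction of current-gen sales that are "upper midrange to extreme":
upper_mid_fraction = share_4070_plus / share_rtx40_total  # ~44%

per_month  = nvidia_consumer_gpus_per_month * upper_mid_fraction
months_dry = 2  # 40-series supply ran dry around December

print(f"Upper-midrange+ share: {upper_mid_fraction:.0%}")  # ~44%
print(f"Per month: ~{per_month:,.0f} GPUs")                # ~1 million
print(f"Pent-up demand after {months_dry} months: ~{per_month * months_dry:,.0f}")
```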
 
I don't think anyone really thought the cards wouldn't sell out quickly / immediately. There's just too much of a backlog of people willing to pay more. What we really don't know is how many cards were actually sold from all the major places. Someone at Nvidia said there would be far more 5070 Ti than there were 5080. I don't doubt that. But how many 5080 were there?


We know that the 5090 and 5080 were largely vaporware; most stores only had a few to sell. Remember, a GH100 GPU is 814mm² worth of TSMC 4N silicon and sells for something like 32 or 33 grand. Blackwell is on 4NP, with the 5090 using the GB202 at 750mm² worth of TSMC 4NP silicon with 92.2bn transistors. The GB100 die doesn't have an official size yet, but is said to have 104bn transistors and is rumored to sell for 35 to 40 grand.

https://www.tomshardware.com/news/t...aled-300mm-wafer-at-5nm-is-nearly-dollar17000

As reported by Tom's, a single TSMC 300mm 4N wafer is going to cost approximately 17 grand. nVidia only gets so many of them per year because TSMC has other customers too, like AMD, Intel and, let's not forget, Apple. From that wafer, how many 50-series GPU chips can they make vs. how many GB100/GB200 can they make, and what is the profit margin on each? We are talking almost an order of magnitude difference in profit per mm² of precious silicon allotment used.

When we compare the two it's not even close; nVidia is losing a ridiculous amount of potential revenue for every consumer GPU they make. They could shut down their entire consumer division, cancel all GeForce development, and end up making even more money than they already are. I'm positive that they are only making gaming GPUs because Jensen Huang likes to wear leather jackets on stage and claim to be "for gamers". I'm also pretty sure their atrocious pricing model is that way to justify wasting precious silicon on cheap "gamer" GPUs. And yes, $5,000 for a GB202 is "cheap" when compared to the $40,000 for a GB100 and $70,000 for the GB200.
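To put rough numbers on the per-wafer question, here's a minimal sketch using the classic dies-per-wafer approximation. The die sizes and prices are the ones quoted in this thread (not official figures), and it ignores yield, packaging, HBM, and the fact that nVidia sells chips to board partners rather than whole cards:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic dies-per-wafer approximation; ignores defect density/yield."""
    r = wafer_diameter_mm / 2
    whole_dies = math.pi * r**2 / die_area_mm2                # wafer area / die area
    edge_loss  = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(whole_dies - edge_loss)

# Die sizes and prices as quoted in this thread -- estimates, not official numbers.
gh100 = dies_per_wafer(814)  # ~63 candidate dies per wafer
gb202 = dies_per_wafer(750)  # ~69 candidate dies per wafer

print(f"GH100: {gh100} dies -> gross ~${gh100 * 32_500:,}/wafer at $32.5K each")
print(f"GB202: {gb202} dies -> gross ~${gb202 * 2_000:,}/wafer at $2K (5090 MSRP)")
```

That naive comparison puts the gross-revenue gap around 15x, but treat it as an upper bound on the disparity rather than a profit figure: it ignores yield, and nVidia sells GB202 chips to board partners for far less than the $2,000 a whole card fetches.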
 
It is business, the majority of the silicon is going to AI clusters and databases for the big money and they don't really care about gaming.
Also: Look at the cult members lining up regardless of how many boots to the tailbone nvidia hands them.
Why would they have to change what they are doing?
You act like you understand what is going on in the first part of your post, then demonstrate you don't in the 2nd part.

Nvidia doesn't need to sell gaming GPU's any more. If people don't buy them at the prices Nvidia wants to sell them, then Nvidia isn't going to drop all the prices. They're going to stop selling them. Your choices are pay higher prices or find another hobby.
 
When we compare the two it's not even close; nVidia is losing a ridiculous amount of potential revenue for every consumer GPU they make. They could shut down their entire consumer division, cancel all GeForce development, and end up making even more money than they already are. I'm positive that they are only making gaming GPUs because Jensen Huang likes to wear leather jackets on stage and claim to be "for gamers". I'm also pretty sure their atrocious pricing model is that way to justify wasting precious silicon on cheap "gamer" GPUs. And yes, $5,000 for a GB202 is "cheap" when compared to the $40,000 for a GB100 and $70,000 for the GB200.
The GPUs in Nvidia's gaming cards are also used in their A-series workstation cards, which MSRP for up to $9,000. So while your overall point stands that Nvidia is losing out on comical amounts of money producing anything but enterprise AI accelerators, Nvidia isn't producing GPUs just so Huang can wear leather jackets on stage.

https://www.dell.com/en-us/shop/del...raphics-card/apd/490-bjms/graphic-video-cards
 
We know that the 5090 and 5080 were largely vaporware; most stores only had a few to sell. Remember, a GH100 GPU is 814mm² worth of TSMC 4N silicon and sells for something like 32 or 33 grand. Blackwell is on 4NP, with the 5090 using the GB202 at 750mm² worth of TSMC 4NP silicon with 92.2bn transistors. The GB100 die doesn't have an official size yet, but is said to have 104bn transistors and is rumored to sell for 35 to 40 grand.

https://www.tomshardware.com/news/t...aled-300mm-wafer-at-5nm-is-nearly-dollar17000

As reported by Tom's, a single TSMC 300mm 4N wafer is going to cost approximately 17 grand. nVidia only gets so many of them per year because TSMC has other customers too, like AMD, Intel and, let's not forget, Apple. From that wafer, how many 50-series GPU chips can they make vs. how many GB100/GB200 can they make, and what is the profit margin on each? We are talking almost an order of magnitude difference in profit per mm² of precious silicon allotment used.

When we compare the two it's not even close; nVidia is losing a ridiculous amount of potential revenue for every consumer GPU they make. They could shut down their entire consumer division, cancel all GeForce development, and end up making even more money than they already are. I'm positive that they are only making gaming GPUs because Jensen Huang likes to wear leather jackets on stage and claim to be "for gamers". I'm also pretty sure their atrocious pricing model is that way to justify wasting precious silicon on cheap "gamer" GPUs. And yes, $5,000 for a GB202 is "cheap" when compared to the $40,000 for a GB100 and $70,000 for the GB200.
Technically, only GB200 is on 4NP — GB202 and GB203 and GB205 are all still on TSMC 4N. Which is mostly just about the number of full metal layers I suspect, or some other tweak, as they're all using the same equipment. But yes, I agree with the rest and have said it before. It's not that Nvidia won't make any consumer GPUs ever, but they're second tier priority now. Or really, third tier after professional GPUs like Nvidia L40 and such.

It's not necessarily an order of magnitude difference in profit, though. The costs associated with creating B200 are significantly higher than with a GB202 graphics card. Probably the chips for GB202 are only about $250 per chip. For GB200, it's maybe $285 per single chip, but two are needed... and then there's a bunch of advanced packaging. It's actually probably a lot less than that, because the $17,000 figure was from 2020 and prices do come down on a mature process (which 5nm-class from TSMC now qualifies as). Anyway, B200 then gets slapped onto a full GB200 board with two CPUs and four GPUs, which costs... I'm not sure. LOL

There are a bunch of places saying it's $3 million for a full NVL72 setup, but I don't know if that's actually correct. A lot of them seem to trace back to the same original post, which was a Barron's price estimate, not an actual price quote. $3 million for a complete rack with 18 GB200 servers, 9 NVLink units, and all the other stuff? That actually sounds pretty damn cheap, and I'd expect it to cost closer to twice that much! ("The more you buy, the more you save!" LOL)

$3 million, though, divided by 9 would be $333K per two GB200 servers and a single NVLink unit. Let's just estimate $33K for the NVLink unit and other stuff, leaving $300K for the two servers. $150K each, with two CPUs and four GPUs, plus RAM and storage and power? Like I said, I'd be surprised if the actual price is "only" $3 million. But if it is, that would only be maybe $25~$30K per B200 GPU ($50K for everything else estimate), which uses two dies that are each larger than the GB202, plus more expensive HBM3e memory and packaging.

If the net price ends up at $12,500 per B200, and that's two 814mm2 chips? It could be more cost effective to make professional GPUs that sell for $9000. Assuming there's enough demand for those, as opposed to demand for B200 stuff.
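For anyone following the division above, here it is spelled out. The $3 million rack figure is a Barron's estimate and the per-component splits are my guesses, so treat the output as speculative:

```python
# Dividing up the (unconfirmed) $3M NVL72 rack estimate from above.
# Every allocation below is a guess, not a quoted price.

rack_price   = 3_000_000
groups       = 9                               # 18 GB200 servers + 9 NVLink units
per_group    = rack_price / groups             # ~$333K: 2 servers + 1 NVLink unit
nvlink_guess = 33_000                          # NVLink unit and other stuff (guess)
per_server   = (per_group - nvlink_guess) / 2  # ~$150K each
other_guess  = 50_000                          # CPUs, RAM, storage, power (guess)
per_b200     = (per_server - other_guess) / 4  # four B200 GPUs per server

print(f"Per server: ${per_server:,.0f}")       # ~$150K
print(f"Implied per B200: ${per_b200:,.0f}")   # ~$25K, the low end of $25-$30K
```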
 
I get the desire for fire and brimstone and random invectives but I feel they should not be aimed at the review (let alone the reviewer). And happily, most of us live in capitalist societies: don’t like it for the price? Don’t buy it. If the market agrees, the prices will drop. The reviewer can hardly review the card based on the price any given individual may purchase the card for. MSRP works. If prices soar above MSRP, don’t buy it.

I could do with 16GB VRAM; my 4070 isn’t great pushing my 4K monitor for games. So I’m going to wait and see what happens with the 9070 series and with prices (and stock). And I’ll buy this year. Or not. And if not, then I’ll buy the super variants or 9075 or Celestial or whatever is available next year. Or not. 60x0 series cards in two years +/-? Fine. I won’t like waiting that long but it’s for games and the options and alternatives are legion.
 
I'm sure NVidia is irrevocably saddened by your stance and will change their ways going forward.
Over one person? Unlikely. Over an entire industry, as people gradually get fed up and stop buying their products? Then perhaps. On the other hand, they'd probably just say FU to gamers overtly at that point and switch their product stack entirely to AI.
 
It's not necessarily an order of magnitude difference in profit, though. The costs associated with creating B200 are significantly higher than with a GB202 graphics card. Probably the chips for GB202 are only about $250 per chip. For GB200, it's maybe $285 per single chip, but two are needed... and then there's a bunch of advanced packaging. It's actually probably a lot less than that, because the $17,000 figure was from 2020 and prices do come down on a mature process (which 5nm-class from TSMC now qualifies as). Anyway, B200 then gets slapped onto a full GB200 board with two CPUs and four GPUs, which costs... I'm not sure. LOL

I didn't say it was; I said it was almost an order of magnitude. Likely it's around 7 to 8x the profit per wafer of silicon used.

It's important to measure in silicon because that is what's holding them back right now. nVidia products are on back order for twelve to eighteen months because TSMC cannot supply enough silicon. Wafer prices have gone up, not down, because there is far more demand for them than supply. nVidia, AMD, Intel and, not to mention, Apple are all demanding higher allotments from TSMC. Apple is important here because they have a tendency to drop astronomical amounts of money on TSMC to pre-purchase whatever new process is going to power their next chip. If TSMC wasn't the limiting factor, nVidia could be pushing out even more product than they already are. This is why every single consumer GPU is actually a loss in potential revenue for them, as that silicon could have gone to one of the big customers waiting twelve-plus months. From a pure business point of view, selling any consumer cards is lost revenue. Of course, from a marketing point of view, it makes sense not to entirely abandon the sector that built you up.

For nVidia, the higher up the product stack you go, the higher (not lower) the margins. Their full datacenter racks are by far the most profitable, but not everyone wants those, so they offer many other lower-tier products for integrations. In either case, any one of those products is far more profitable than a single 5090, much less the 5080 or 5070s that will follow. There could be an argument for the 5060 from leftover silicon that either wasn't good enough quality or wasn't large enough to make a B100.

Also, the B100 and B200 (just two B100s together) are TSMC 4NP, the exact same as the rest of the 50 series. The H100 and H200 are on the TSMC 4N node, the same as the 40 series. Basically, the X100 dies are the enterprise datacenter chips for the series, with the X200 dies being the consumer ones for that same architecture. This is why I said that the 5090 is just a mini datacenter GPU. I'm gonna dig around; there might be some misreports about the 4NP vs 4N process, since both are reported as "4/5 nm" and some people ignore the details.
 
This is simply untrue, as the enterprise backorders have been due to packaging, not production.

We're one of the people on back order, and not for the big products but for simple datacenter GPUs to accelerate our VDI infrastructure. The word from the source is that they can't make GPU dies fast enough to meet demand. TSMC plans on bringing more capacity online, but that's at least eighteen months away, probably more.

I'll take my inside source info any day of the year.
 
We're one of the people on back order, and not for the big products but for simple datacenter GPUs to accelerate our VDI infrastructure. The word from the source is that they can't make GPU dies fast enough to meet demand. TSMC plans on bringing more capacity online, but that's at least eighteen months away, probably more.

I'll take my inside source info any day of the year.
Your source may not understand the difference. Advanced packaging, for the interposer plus HBM, is a separate task. It has been a bottleneck for well over a year. There was another plant opened to help with this, but it's not just Nvidia using the advanced packaging and so it still may not be enough.
Also, the B100 and B200 (just two B100s together) are TSMC 4NP, the exact same as the rest of the 50 series. The H100 and H200 are on the TSMC 4N node, the same as the 40 series. Basically, the X100 dies are the enterprise datacenter chips for the series, with the X200 dies being the consumer ones for that same architecture. This is why I said that the 5090 is just a mini datacenter GPU. I'm gonna dig around; there might be some misreports about the 4NP vs 4N process, since both are reported as "4/5 nm" and some people ignore the details.
Again, you're wrong on the consumer Blackwell. The 50-series is not using 4NP. I was at the briefings, and it's in the Blackwell RTX whitepaper. Check pages 15, 48, 51, 53, and 56. They all say "TSMC 4nm 4N NVIDIA Custom Process" for the Blackwell RTX 50-series parts, the same as the Ada RTX 40-series.

Yes, Hopper also used 4N. Blackwell data center B100/B200 use 4NP. But Blackwell RTX GPUs are still using the exact same process node as Ada. B100 is just a downgraded variant of the B200. It's not some special chip, so there's no "B100" family and a "B200" family of chips. B100 is really just a lower power version, with otherwise identical specs. The chips are the same, it's the power delivery of the OAC module that's changed I guess.

Put another way (and this is more for others, because I'm sure you already know this), the codenames are:
GB200: Built on TSMC N4P. This is used in B100/B200 data center parts. Confusing because GB200 also overlaps the "Grace Blackwell 200" GB200 Superchip.
GB202: Built on TSMC N4. Only used (so far) in RTX 5090.
GB203: Built on TSMC N4. Used (so far) in RTX 5080, 5070 Ti, 5090 Laptop GPU, and I think the 5080 Laptop GPU as well.
GB205: Built on TSMC N4. Used (so far) in RTX 5070 Ti Laptop GPU and RTX 5070 desktop.
GB206: Built on TSMC N4. Used in RTX 5070 Laptop GPU as well as (presumably) future RTX 5060 Ti / 5060 desktop GPUs.
GB207: Built on TSMC N4. Rumored codename for the future RTX 5060 Laptop GPU, AFAIK.

All the "GB20x" names are, I think, for "Graphics Blackwell [number variant]" — this has been Nvidia's naming standard going back to the Tesla architecture in the late aughts (2009 for the GeForce 300 limited release that was, IIRC, only for mobile or OEM? and 2010 for a few GeForce 400-series parts). The two exceptions are the TU1xx family (Turing), because GT was already used for Tesla, and the AD10x family (Ada Lovelace), because GA was already used for Ampere.
 
We're one of the people on back order, and not for the big products but for simple datacenter GPUs to accelerate our VDI infrastructure. The word from the source is that they can't make GPU dies fast enough to meet demand. TSMC plans on bringing more capacity online, but that's at least eighteen months away, probably more.

I'll take my inside source info any day of the year.
They had to respin Blackwell which eats into their existing wafer contracts. This should only be affecting GB200 AFAIK and they're not going to buy additional wafers for consumer products. TSMC is not bringing new N5 derived capacity online outside of the AZ fab which has already started production. Throughout 2023 they had utilization problems which led them to slow roll any new fab space/expansion. They recovered in 2024 and I believe are expecting full utilization in 2025, but that doesn't mean they have insufficient capacity. If there's any volume issues it goes back to nvidia's wafer buys.
 
Again, you're wrong on the consumer Blackwell. The 50-series is not using 4NP. I was at the briefings, and it's in the Blackwell RTX whitepaper. Check pages 15, 48, 51, 53, and 56. They all say "TSMC 4nm 4N NVIDIA Custom Process" for the Blackwell RTX 50-series parts, the same as the Ada RTX 40-series.

Thanks for the paper; there was some confusion a while back.

As for my source, packaging involves multiple components, and the two hardest to get are HBM and the NPU/GPU/AIPU/whatever-nVidia-calls-it die itself. Both are in short supply, but it's the dies that usually cause the holdup. The very idea that TSMC has unused capacity at the 4nm scale is just insane.

Do people not realize the kind of gold rush that is happening right now with AI? Anything you think you know, the reality is so much worse. Our guys are being pressured to include "AI" in our products, and we're a fricking nonprofit finance cooperative in the energy sector that doesn't have to worry about shareholders. If we're being pressured this much, I can't imagine how bad it must be for the for-profit companies who rely on quarterly reports to hold up stock prices. Everyone is trying to include the letters "AI" in every product and to fit some sort of "AI"-related project into those press releases. This has caused everyone to rush to acquire "AI" computing capacity, and right now nVidia is the only competitive game in town. AMD and Intel are trying, but the thing is, all three of them use ... drum roll please ... TSMC. Well, Intel uses them for some products; next generation is anyone's guess.

For HBM it's almost all Samsung and SK Hynix, both South Korean companies with their own foundries. Both also partner with TSMC for HBM production, so we have three companies using three separate sets of foundries to make that memory. Samsung is the world's second-largest foundry and, like TSMC, is expanding fast due to AI demand. Basically, it's just easier to source HBM than it is an H100 or B100 GPU/NPU/whatever.


Yeah, B200 vs GB200 vs GB202/3/4 gets very confusing. B100 and B200 are just the dies, GB200 is the package with one or more of those dies together, while GB202/etc. is the consumer product die. I'm trying not to involve the GB200 itself, just the usual H100/B100 vs the consumer versions GB202/203/204. Why nVidia had to use such a wacky naming convention this time around is anyone's guess.
 
Thanks for the paper; there was some confusion a while back.

As for my source, packaging involves multiple components, and the two hardest to get are HBM and the NPU/GPU/AIPU/whatever-nVidia-calls-it die itself. Both are in short supply, but it's the dies that usually cause the holdup. The very idea that TSMC has unused capacity at the 4nm scale is just insane.

Do people not realize the kind of gold rush that is happening right now with AI? Anything you think you know, the reality is so much worse. Our guys are being pressured to include "AI" in our products, and we're a fricking nonprofit finance cooperative in the energy sector that doesn't have to worry about shareholders. If we're being pressured this much, I can't imagine how bad it must be for the for-profit companies who rely on quarterly reports to hold up stock prices. Everyone is trying to include the letters "AI" in every product and to fit some sort of "AI"-related project into those press releases. This has caused everyone to rush to acquire "AI" computing capacity, and right now nVidia is the only competitive game in town. AMD and Intel are trying, but the thing is, all three of them use ... drum roll please ... TSMC. Well, Intel uses them for some products; next generation is anyone's guess.

For HBM it's almost all Samsung and SK Hynix, both South Korean companies with their own foundries. Both also partner with TSMC for HBM production, so we have three companies using three separate sets of foundries to make that memory. Samsung is the world's second-largest foundry and, like TSMC, is expanding fast due to AI demand. Basically, it's just easier to source HBM than it is an H100 or B100 GPU/NPU/whatever.
You're struggling with facts here. Nvidia doesn't use Samsung HBM, only SK Hynix. According to Nvidia themselves, Samsung HBM doesn't meet their standards.

https://www.koreaherald.com/article/10385514

As far as packaging shortages, this came directly from TSMC who I would consider a better source than yours.

https://sourceability.com/post/tsmc...ikely-through-2024-despite-expanding-capacity

The only reason we're seeing any GPUs produced at all is because packaging shortages are limiting the number of enterprise AI accelerators that can be produced.
 
The 5070 Ti should have been the 5070... but when there is no competition, a lot of $$$$ happens.

Nah, they want to sell a 5070 Ti Super "I set your house on fire" card, duct taped (I mean thermal padded) to hell and back.

Don't delude yourself either: even with competition, AMD would just sell an inflated card next to it.

Until Intel gets more serious, I would expect the 6000 series to be the same affair. I hope they do get serious, as nVidia needs a dethroning and I don't expect AMD to do it anytime soon.

Intel and AMD just need to be feature-rich to stay competitive.

I can see a battle in the $100-600 range, but I wouldn't hold my breath at the high end.
 
You have basically every proper review of this card literally SHOWING YOU how ludicrously worthless this card is, and lo' here comes TOMS HARDWARE!

This single article shows just how out of touch this site has become. I can't believe you would suggest that it's even remotely worth anything. Absolutely clueless.
 
I'm not even clicking the article based on that title. You got some of the "bad stuff", author. The 5070 is a decent performance upgrade depending on where you come from, but also terrible value no matter where you come from.
 
Do people not realize the kind of gold rush that is happening right now with AI? Anything you think you know, the reality is so much worse. Our guys are being pressured to include "AI" in our products, and we're a fricking nonprofit finance cooperative in the energy sector that doesn't have to worry about shareholders. If we're being pressured this much, I can't imagine how bad it must be for the for-profit companies who rely on quarterly reports to hold up stock prices.
I work for a company that makes jigs for spot welding/arc welding and end-of-arm robot grippers, which last year went from private ownership to being owned by an investment firm.

Management made everyone's end of year bonuses and the next round of raises dependent on everyone finding ways to integrate AI into their area.

I wish this was a joke.