Intel's Arc A750, A770 Prices Revealed: Mid-Range is Back

I'm very hopeful this goes well for Intel. The disastrous 40xx launch, plus the EVGA exit and the PSU confusion, have made me negative on Nvidia for the first time. The way they lied by omission about how one of their 4080s is really a 4070 in disguise is pretty gross. Of course there's Radeon, but if AMD explodes in sales because of how Nvidia fumbled this launch, that just opens up an opportunity for green and red to switch positions, and the customer still loses.

I'm hoping Intel can put more pressure on both companies to do right by their customers. Honestly, I'm more excited for Arc than for anything else in tech right now, whether or not I actually buy one of the cards.
 
While the A750/770 may look somewhat promising, I'm really rooting for the A580 to provide great bang-per-buck to fill the huge gap between the sub-par A380 and the still somewhat pricey A750.
I suspect the A580 won't be too much cheaper than the A750. We'll see, but somewhere in the $229 to $249 range seems most likely. And if cards are actually available at those prices, and performance still easily beats the RTX 3050, that would be great. Can it beat AMD's RX 6600, though? 🤔
 
If it's at the performance level of a 3050, then no. But this should be a good thing for consumers, especially as Nvidia is getting a bit uppity with pricing. Truth be told, AMD would likely do the same if they could; see Socket AM5. But with three companies in competition, hopefully they'll all have to provide value.
 
I think they mean that AMD would then raise prices and become the new Nvidia. But with Intel and Nvidia both out there, I'm not sure. Nvidia had great cards for a long time and AMD was kind of just there, but the Nvidia base seemed loyal. If people start to vote with their wallets because of competition, the market gets more interesting.
 
The way they lied by omission about how one of their 4080s is really a 4070 in disguise is pretty gross.
Just as the 3070 and its predecessors all the way back to the 670 are 560 Tis in disguise. NVIDIA already pulled this trick once with the 600-series. The only way to combat it is to vote with our wallets.
 
Isn't that the case with every new piece of hardware?

Yes, but hardware from AMD or NV has a history of support and previous-generation products behind it.

Intel has a history of flopping in this space...

So it's not 1:1.

I'm hoping these cards are good and they do well, but I'll let other people spend their money to find out.
 
So it's possible to release a 406-mm2-GPU card for $330 in late 2022 without violating Moore's Law, or whatever it was that Huang said. Make no mistake, the RTX 4080s (295 and 379 mm2) are in the same size class as the A770 (406 mm2); it's just that NVIDIA wants to charge up to $1,100 in that class.

While the A770 is priced nicely, it's not competitive enough to make a difference -- NVIDIA can just brand their RTX 3050 successor as the RTX 4060 later on and call it a day. Intel is effectively over a generation behind. I fear that the new mainstream will be based around ~200-mm2-GPU cards, and consumers will have access to even less relative performance. NVIDIA is the bad guy now, but Intel and AMD would likely follow if/when they become competitive.

Update: In a funny sort of way, Intel is almost being allowed to exist in this space for now -- their current-gen 406-mm2 GPU performs somewhere in the range of a 270-390-mm2 last-gen GPU, i.e., it's over a generation behind. NVIDIA could release a significantly faster card for the same price, or match its performance for significantly less. $330 is already an aggressive price for a 406-mm2 card; I'm not sure how much lower, and for how long, Intel could theoretically go.
 
TSMC N6 ~= TSMC N7, which is what Intel is using (and what AMD is using for the RDNA 3 MCD chiplets, to save money).
TSMC 4N ~= TSMC N5, which is what Nvidia is using for Ada. N5 wafers basically cost twice as much as N7 wafers.
Nvidia is also claiming potentially double the performance with the RTX 4080 12GB compared to the A770/A750.

Could Nvidia sell AD104 for less than $900? Almost certainly. But I think $300-ish on ACM-G10 graphics cards (the GPU in the Arc A700 series) is basically breaking even at best. Intel is willing to basically take a loss on the graphics card hardware in order to gain market share. Nvidia, with 80% of the market, has no need for such tactics right now. We'll see how well or poorly the RTX 40-series sells soon enough. If it sells out at launch, no one can really fault Nvidia for charging as much as it can get away with. That's the job of every corporation.
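As a rough sketch of that wafer math (the wafer prices below are assumed ballpark figures, chosen to match the roughly 2x cost claim above, and the dies-per-wafer formula is the standard approximation):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Classic approximation for gross dies on a round wafer,
    ignoring yield, scribe lines, and edge exclusion."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Assumed wafer prices (ballpark, not disclosed figures): N5-class at
# roughly twice N7-class, per the post above.
N7_CLASS_WAFER = 8_000   # USD, assumed
N5_CLASS_WAFER = 16_000  # USD, assumed

acm_g10 = dies_per_wafer(406)  # ACM-G10 (Arc A750/A770), TSMC N6
ad104 = dies_per_wafer(295)    # AD104 (RTX 4080 12GB), TSMC 4N

print(f"ACM-G10: {acm_g10} dies/wafer, ~${N7_CLASS_WAFER / acm_g10:.0f}/die")
print(f"AD104:   {ad104} dies/wafer, ~${N5_CLASS_WAFER / ad104:.0f}/die")
```

Even with these rough numbers, raw silicon works out to tens of dollars per die for both chips; memory, board, cooler, and R&D recovery are where the rest of the money goes.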
 
We'll see how well or poorly the RTX 40-series sells soon enough. If it sells out at launch, no one can really fault Nvidia for charging as much as it can get away with. That's the job of every corporation.
We will see whether Nvidia has a "launch" or a "paper launch." As many times before, when last-generation cards are sitting in hefty stock, I suspect Nvidia will soft-launch the new generation and not actually ship it in the large quantities we know they can. That creates demand for a product they aren't shipping, making the cards "sold out at launch" because retailers only had 2000 of each card to sell per country.
 
Intel badly needs to throw money at the driver side of things. The A380 driver reviews were horrendous. Not bad, not horrible, but horrendous!

If they can get the drivers rock solid and keep prices low-ish, they can comfortably land in the mid-range performance market. Which is great news for everyone!
 
Could Nvidia sell AD104 for less than $900? Almost certainly. But I think $300-ish on ACM-G10 graphics cards is basically breaking even at best. Intel is willing to basically take a loss on the graphics card hardware in order to gain market share.
You are, obviously, correct on all points. That being said, while readers might hold stock, this isn't a business-centered discussion, unless I'm mistaken. We are here as consumers, and as journalists seeking to inform, and I don't feel like we need a lesson on how capitalism works.

Sure, NVIDIA's Ada Lovelace GPUs are more expensive to produce due to a number of factors -- inflation, process, US-China relations, even memory -- but saying that AD104/103-equipped cards could be sold for less than $900-$1100 is a bit of an understatement, seeing as they could be sold for a lot less. Charging this much for 295-379 mm2 cards is simply egregious, despite the aforementioned circumstances.

The performance argument has always baffled me, too. The point of a new generation of processors, serial or parallel, is usually to increase performance at a similar price point, i.e., relative performance. That's normal. That's the point. If we kept increasing the price along with performance, a video card would have to cost about a million bucks by now (the quick sketch below puts a rough number on that). It's a corporation's marketing team's job to sway consumers by focusing on favorable aspects, e.g. performance, and a journalist's job to inform in a straightforward manner.

Shouldn't we, at least, inform the public that this is highly unusual, that there's no pressing reason for these prices, and that RTX 4080-branded cards are equivalent to RTX 3070/3060 cards? Shouldn't we recount a little history about how this has happened before, when NVIDIA managed to essentially double its prices over a single generation by branding the successors to the GTX 560 Ti/560 as the GTX 680/670? And, if we had to speculate, point out how all this is at least a little suspect in the post-PoW-Ethereum world?

With all my respect and gratitude, and those are sincere, I feel like we should be having a discussion along the lines outlined above, instead of pointing toward process specifics, performance gains, and macroeconomics in a way that feels at least a little like a justification.
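A quick back-of-envelope on that claim, using assumed round numbers: a ~$300 card circa 1999, performance roughly doubling every two-year generation, and price scaling 1:1 with performance (which is exactly what normally does not happen):

```python
# Assumed: a $300 card in 1999, ~2x performance per two-year generation,
# and price scaling 1:1 with performance. Purely illustrative.
base_price = 300
generations = 11  # roughly 1999-2022 at ~2 years per generation
print(f"${base_price * 2 ** generations:,}")  # $614,400 -- order of a million
```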
 
I suspect the A580 won't be too much cheaper than the A750. We'll see, but somewhere in the $229 to $249 range seems most likely. And if cards are actually available at those prices, and performance still easily beats the RTX 3050, that would be great. Can it beat AMD's RX 6600, though? 🤔
The A580 cannot be priced above $200 if it fails to beat the RX 6600, which can often be had in the $220-250 range. Intel compares Alchemist to the RTX 3000 series to make itself look better, thanks to the grossly inflated Nvidia tax, but more budget-conscious end-users will be looking at performance per dollar versus everything past, present, and probable near-future.

An Intel slide from a few months ago showed the A580 slotting into a price bracket starting at $200, which I interpret as Intel being resigned to go that low if necessary. At $230, there is a real choice to be made between an RX 6600, which has mostly mature drivers for nearly every situation, and the A580, which is somewhat of an open beta with no native support for anything older than DX11, even if the A580 turns out to be generally faster.
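To make the perf-per-dollar framing concrete (the performance indices below are placeholders for illustration, not benchmark results):

```python
# Hypothetical relative performance, indexed to RX 6600 = 100. Placeholder
# numbers only; street prices per the post above.
cards = {
    "RX 6600":  {"perf": 100, "price": 230},
    "Arc A580": {"perf": 105, "price": 230},  # assumed slightly faster
}
for name, c in cards.items():
    print(f"{name}: {c['perf'] / c['price']:.2f} perf per dollar")
```

Even at rough parity on paper, driver maturity would be the tiebreaker described above.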
 
Given the charts we've seen of the A750 trading blows with a 3060, from a raw performance perspective I don't think the RX 6600 has a chance against the A750.

This will be especially true at 1440p with high-res textures, where memory bandwidth matters. The top-model A750/A770 Arcs have nearly the memory bandwidth of a 3070 Ti.

Even the A580's hardware looks roughly like a heavily overclocked 3060 with the memory bus and bandwidth of a 3070 Ti.

It's driver optimization that will hold them back, and that will keep improving over time.

Because of that, I suspect these cards will age very well from a performance standpoint. From a hardware view, they are equivalent to cards that cost $100-$250 more, and eventually, maybe a year or so from now, I'd bet they will start touching the performance of those pricier models.
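For reference, the bandwidth arithmetic behind that claim is just bus width (in bytes) times effective data rate; the figures below are the commonly published specs, worth double-checking against retail cards:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# (bus width in bits, effective data rate in Gbps), per published specs
cards = {
    "Arc A770":    (256, 17.5),  # GDDR6
    "Arc A750":    (256, 16.0),  # GDDR6
    "RTX 3070 Ti": (256, 19.0),  # GDDR6X
    "RTX 3060":    (192, 15.0),  # GDDR6
}
for name, (bus, rate) in cards.items():
    print(f"{name}: {peak_bandwidth_gbs(bus, rate):.0f} GB/s")
```

That works out to 512-560 GB/s for the A750/A770 against 608 GB/s for the 3070 Ti and 360 GB/s for the 3060, which is the gap being described.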
 
I'm hoping Intel can put more pressure on both companies to do right by their customers.

Nvidia reacts to real sales, not so much to the sentiment being voiced on the internet. As for the EVGA situation, give it a quarter or two and no one will really care about it anymore. Nvidia did what they did with the 40-series because of what they saw in Q2 sales. Even if sales end up tanking, they will simply adjust. Better to start at a high price and drop it later as needed than to start with a low price from the get-go.

As for the competition from Intel, right now they're trying to compete with 30-series MSRPs. The problem is that Nvidia isn't their only competition. AMD has already made official price cuts on the RX 6000 series; the RX 6600 XT, which is already better than the 3060, can sometimes be had for under $300. Nvidia still hasn't made an official price cut, instead mostly leaving it to the AIBs to adjust their own pricing rather than giving them an official cut to follow.
 
Shouldn't we, at least, inform the public that this is highly unusual, that there's no pressing reason for these prices, and that RTX 4080-branded cards are equivalent to RTX 3070/3060 cards?
I said more or less exactly this: https://www.tomshardware.com/news/why-nvidias-4080-4090-cost-so-damn-much
------------
What was the thought process behind calling the 12GB chip a 4080 instead of a 4070, especially since it's a different chip?

Nvidia's Justin Walker, Senior Director of Product Management, said, "The 4080 12GB is a really high performance GPU. It delivers performance considerably faster than a 3080 12GB... it's faster than a 3090 Ti, and we really think it's deserving of an 80-class product."

Frankly, that's a crap answer. Of course it's faster! It's a new chip and a new architecture; it's supposed to be faster. Remember when the GTX 1070 came out and it was faster than a 980 Ti? I guess that wasn't "deserving" of an 80-class product name. Neither was the RTX 2070 when it matched the 1080 Ti, or the 3070 when it matched the 2080 Ti.
------------
Here are the things we don't know:
  1. How much has Nvidia spent on R&D just for the past two years that could be directly pointed at Ada?
  2. What is the real BOM for a 40-series card?
If we had those two things, we could make better-informed estimates of how much Nvidia could charge for the RTX 4080 / 4090 and still make a modest profit. But of course Nvidia would never reveal those figures. What we do know is that Nvidia is a large corporation with thousands of employees working on graphics cards. With a lot of them in California, we can reasonably estimate that they're all earning six figures minimum on the engineering and software side.

7,500 people at $100K each would be $750 million. $100K might be low, so let's call that a billion. (Realistically, only half are in gaming directly, but probably average salaries are way higher than $100K.) Nvidia grossed $26 billion in 2021, of which $12.1 billion was from the gaming sector. $1 billion in salaries could easily be covered... but then buildings, manufacturing, equipment, etc. all adds up.

I'm not saying Nvidia didn't make a lot of money in 2021. Obviously it did. But hypothetically let's say it sold 15 million GPUs in 2021 at an average price of $810. That would be $12.1 billion. Even if we go with a BOM of $600 (very generous toward Nvidia not being greedy), that would be $9 billion and leave $3.1 billion for other stuff. $1 billion for employee salaries drops that to $2.1 billion. Buildings and equipment could easily be $1 billion, maybe more. You eventually get down to the point where Nvidia makes billions and spends billions.
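Spelling that napkin math out (every input is the rough guess stated above, not a disclosed figure):

```python
# All inputs are the rough guesses from the text above, not disclosed figures.
units_sold = 15_000_000      # hypothetical 2021 GPU volume
asp = 810                    # assumed average selling price, USD
bom_per_unit = 600           # generous bill-of-materials guess
salaries = 1_000_000_000     # ~7,500+ staff at six figures, rounded up
facilities = 1_000_000_000   # buildings, equipment, etc.

revenue = units_sold * asp                        # ~$12.15B
after_bom = revenue - units_sold * bom_per_unit   # ~$3.15B
remainder = after_bom - salaries - facilities     # ~$1.15B
print(f"Revenue ${revenue/1e9:.2f}B, after BOM ${after_bom/1e9:.2f}B, "
      f"remainder ${remainder/1e9:.2f}B")
```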

Nvidia also has 30-series inventory to "protect." Whether you like that approach or not, it's a business decision. When you have excess inventory on hand, launching new, faster GPUs at a price that would undercut them would rapidly cut into profits. Maybe Nvidia will eventually need to drop prices, but pre-emptively doing so? It was never going to happen. It's in the lead, it will charge a lot. That's a key reason for these prices.

Don't like them? Don't pay for them! That's the only thing anyone really can do. But having seen the past couple of decades, I can absolutely guarantee there will be people lining up to pay $900 or more for the 4080, and $1600 or more for the 4090. Until or unless that doesn't happen, we're not going to see prices come down.

Intel meanwhile has spent billions on Arc Graphics, and so far has lost nearly everything it put into the division. The Arc A770, even at $349, is not going to make massive profits; it's barely going to break even on the hardware cost. But Intel can't charge more if it doesn't offer a superior product. All indications are that the best Arc can do is maybe match the RTX 3060 Ti, and often not even that, plus you still have driver concerns. Intel has a sunk cost now, so it's recovering as much of it as possible. Hopefully it can also stay in the GPU business and deliver better products in the future — Intel has the ability to lose money on GPUs while gaining market share in the short term, and that seems to be the plan right now.

TL;DR: Nvidia absolutely can sell RTX 3060 at under $300 if it feels the need. It could likely sell RTX 4080 12GB for $500 at a profit, just not a very large one. That would in turn kill sales of existing 30-series. So prices will be set as high as Nvidia wants and will stay there until or unless there's competition and/or lack of demand to force prices down. AMD might do that, Intel is trying to do that. We'll see how things play out in the coming months. Feigning anger on behalf of gamers, though? I'm not going to do it, because the justification for Nvidia's actions is... justified. Not by me, but by the markets of the world. Again, if you don't like it and don't like Nvidia, I'm totally fine with that and it won't bother me in the least if people don't pay the high prices Nvidia is asking.
 
Thank you for taking the time to respond.

Again, you're (mostly) correct. My original point stands, though: understanding why a corporation does what it believes is best for itself doesn't mean it shouldn't be criticized. Criticism is not feigning anger; neither is providing information and context. There are two sides to every coin. My understanding is that this is a consumer-centered outlet, and by extension these are consumer-centered forums. Again, correct me if I'm wrong.

Before I make my next point, I must say that while it's impossible for us to know NVIDIA's costs, as you've pointed out, we can make educated guesses; from what I know (which, full disclosure, is far from exhaustive), your BOM assumptions are quite favorable, as is the assumption that the RTX 4080 12GB couldn't do sustainable business at $500.

We might be witnessing the next step in customers having less access to relative performance, i.e., even less bang for your buck (accounting for inflation, added costs, generational leaps, etc.). The thing is, this pricing policy isn't primarily about setting a temporary price due to some circumstance, but about an attempt to set a new precedent. As you've alluded to, this is often successful. Once the precedent has been set, NVIDIA wouldn't back down on pricing, even when circumstances have changed, and neither would Intel and AMD (when they become competitive).

The precedent we're currently seeing is that the successor to the RTX 3060 is branded RTX 4080 (12GB) and costs $900, and the successor to the RTX 3060 Ti/3070 is also branded RTX 4080 (16GB) and costs $1200. Logically, the new mainstream offering could very well be the successor to the RTX 3050 (GA107), or a card based on a similarly small GPU, however NVIDIA chooses to brand it. AMD and Intel have little reason to oppose that if it works; on the contrary, they too would likely love to jump on that wagon, and historical data supports this. The narrative is already centered around the RTX 3060's performance level, not its size class, as a sort of standardized mainstream. Intel's first flagship lands there too. That's gotta be plenty of performance, right? Even less can be quite nice and usable. We haven't been able to buy cards at MSRP for a long time anyway. That sort of narrative gives manufacturers a lot of ammo.

If this precedent is set, the bold new world might be one where the ~$400-500 "mainstream" card is based on a small GPU, many customers are priced out of mid-sized GPUs, and the big silicon is relegated to unobtainium. It's a special kind of dystopia when technology progresses while becoming less and less accessible.
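One way to make that concrete is to look at x80-class launch MSRPs (well-documented US launch prices) and ask how much faster each card must be than its predecessor just to keep performance per dollar flat:

```python
# US launch MSRPs for the x80 class (well-documented figures).
msrps = [
    ("GTX 1080 (2016)", 599),
    ("RTX 2080 (2018)", 699),
    ("RTX 3080 (2020)", 699),
    ("RTX 4080 16GB (2022)", 1199),
]
for (name, price), (_, prev) in zip(msrps[1:], msrps):
    print(f"{name}: +{price / prev - 1:.0%} perf needed to hold perf/$ flat")
```

By this yardstick, the RTX 4080 16GB needs to be roughly 72% faster than the 3080 just to tread water on performance per dollar, before the scenario above even kicks in.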
 