Intel Arc A380 Review: Great for Video, Weak for Gaming

cyrusfox

Distinguished
Thanks for putting up a review on this. I'm really looking for Adobe Suite performance, Photoshop and Lightroom. My experience is that even with a top-of-the-line CPU (12900K), it chugs through some GPU-heavy tasks, and I was hoping Arc might already be optimized for that.
 

Giroro

Splendid
What settings were used for the CPU comparison encodes? I would think that the CPU encode should always be able to provide the highest quality, but possibly with unacceptable performance.
I'm also having a hard time reading the charts. Is the GTX 1650 the dashed hollow blue line, or the solid hollow blue line?
A good encoder at the lowest price is not a bad option for me to have. Although, I don't have much faith that Intel will get their drivers in a good enough state before the next generation of GPUs.
 
What settings were used for the CPU comparison encodes? I would think that the CPU encode should always be able to provide the highest quality, but possibly with unacceptable performance.
I'm also having a hard time reading the charts. Is the GTX 1650 the dashed hollow blue line, or the solid hollow blue line?
A good encoder at the lowest price is not a bad option for me to have. Although, I don't have much faith that Intel will get their drivers in a good enough state before the next generation of GPUs.
Are you viewing on a phone or a PC? Because I know our mobile experience can be... lacking, especially for data-dense charts. On PC, you can click the arrow in the bottom-right to get the full-size charts, or at least get a larger view, where you can then click the "view original" option in the bottom-right. Here are the four line charts, in full resolution, if that helps:

https://cdn.mos.cms.futurecdn.net/dVSjCCgGHPoBrgScHU36vM.png
https://cdn.mos.cms.futurecdn.net/d2zv239egLP9dwfKPSDh5N.png

The GTX 1650 is a hollow dark blue dashed line. The AMD GPU is the hollow solid line, the CPU is dots, the A380 is a solid filled line, and the Nvidia RTX 3090 Ti (or really, the Turing encoder) is solid dashes. I had to switch to dashes and dots and such because the colors (for 12 lines in one chart) were also difficult to distinguish from each other, and I included the tables of the raw data just to help clarify what the various scores were if the lines still weren't entirely sensible. LOL
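For anyone curious how that style-cycling approach works, here's a rough matplotlib sketch (not the actual script behind the review charts; the data and series names are placeholders) showing how line styles, markers, and hollow vs. filled markers can keep a dozen series apart when colors alone can't:

```python
# Rough sketch (placeholder data, not the review's plotting script): cycle
# line styles, markers, and fill styles so ~12 series stay distinguishable
# even when the colors start to blur together.
import itertools
import numpy as np
import matplotlib.pyplot as plt

styles = itertools.cycle(["-", "--", ":", "-."])
markers = itertools.cycle(["o", "s", "^", "D"])
fills = itertools.cycle(["full", "none"])  # filled vs. "hollow" markers

x = np.array([3, 4, 6, 8])  # e.g., bitrates in Mbps
for i in range(12):
    y = 55 + 8 * np.log1p(x) + i  # placeholder quality scores
    plt.plot(x, y, linestyle=next(styles), marker=next(markers),
             fillstyle=next(fills), label=f"codec/GPU {i}")

plt.xlabel("Bitrate (Mbps)")
plt.ylabel("Quality score")
plt.legend(fontsize=6, ncol=2)
plt.show()
```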

As for the CPU encoding, it was done with the same constraints as the GPU: single pass at the specified bitrate, which is generally how you would set things up for streaming (AFAIK, because I'm not really a streamer). 2-pass encoding can greatly improve quality, but of course it takes about twice as long and can't be done with livestreaming. I did not look into other options that might improve quality at the cost of CPU encoding time, and I also didn't look into whether there were other options that could improve the GPU encoding quality.
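To make that constraint concrete, here's a rough sketch of a single-pass, fixed-bitrate software encode driven from Python; the paths and bitrate are placeholders, and these aren't necessarily the exact settings used for the review runs:

```python
# Rough sketch of a single-pass, fixed-bitrate CPU (libx264) encode, along
# the lines of the constraints described above. Assumes ffmpeg is on PATH;
# the paths and bitrate are placeholders, not the review's exact settings.
import subprocess

def encode_cbr(src: str, dst: str, bitrate: str = "6M") -> None:
    """Single-pass encode at a target bitrate, as you'd use for livestreaming."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-c:v", "libx264",   # CPU encoder; swap in h264_qsv / h264_nvenc / h264_amf for GPU encoders
            "-b:v", bitrate,     # single-pass target bitrate
            "-maxrate", bitrate, # cap the rate so it behaves like a streaming encode
            "-bufsize", bitrate,
            "-an",               # drop audio for an apples-to-apples video comparison
            dst,
        ],
        check=True,
    )

encode_cbr("input_4k_source.mp4", "out_cpu_6mbps.mp4")
```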
Thanks for putting up a review on this. I'm really looking for Adobe Suite performance, Photoshop and Lightroom. My experience is that even with a top-of-the-line CPU (12900K), it chugs through some GPU-heavy tasks, and I was hoping Arc might already be optimized for that.
I suspect Arc won't help much at all with Photoshop or Lightroom compared to whatever GPU you're currently using (unless you're using integrated graphics I suppose). Adobe's CC apps have GPU accelerated functions for certain tasks, but complex stuff still chugs pretty badly in my experience. If you want to export to AV1, though, I think there's a way to get that into Premiere Pro and the Arc could greatly increase the encoding speed.
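If you do go the AV1 route, below is a rough sketch of what a hardware AV1 encode through ffmpeg's Quick Sync path looks like; av1_qsv needs a fairly recent ffmpeg build and working Arc media drivers, so treat the exact flags as something to verify rather than gospel:

```python
# Rough sketch of a hardware AV1 encode on Arc via Quick Sync. Assumes a
# recent ffmpeg build with the av1_qsv encoder enabled and working Intel
# media drivers; the file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y", "-i", "premiere_export_master.mov",
        "-c:v", "av1_qsv",  # Arc's fixed-function AV1 encoder exposed through Quick Sync
        "-b:v", "8M",       # single-pass target bitrate
        "out_av1.mkv",
    ],
    check=True,
)
```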
 

magbarn

Reputable
Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
 
Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
Much of the die size probably gets taken up by XMX cores, QuickSync, DisplayPort 2.0, etc. But yeah, it doesn't seem particularly small considering the performance. I can't help but think with fully optimized drivers, performance could improve another 25%, but who knows if we'll ever get such drivers?
 

waltc3

Honorable
Considering what you had to work with, I thought this was a decent GPU review. Just a few points that occurred to me while reading...

I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D. They are probably hoping to push it at a loss at retail to get some of their money back, but I think they will be disappointed when that doesn't happen. As far as another competitor in the GPU markets goes, yes, having a solid competitor come in would be a good thing, indeed, but only if the product meant to compete actually competes. This one does not. ATi/AMD have decades of experience in the designing and manufacturing of GPUs, as does nVidia, and in the software they require, and the thought that Intel could immediately equal either company's products enough to compete--even after five years of R&D on ARC--doesn't seem particularly sound, to me. So I'm not surprised, as it's exactly what I thought it would amount to.

I wondered why you didn't test with an AMD CPU... was that a condition set by Intel for the review? Not that it matters, but it seems silly, and I wonder if it would have made a difference of some kind. I thought the review was fine as far as it goes, but one thing I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions. You started off restricting the A380 to the 1650/Super, which doesn't ray trace at all, and the entry-level AMD GPUs which do (but not to any desirable degree, imo)--which was fine, as they are very closely priced. But then you went off on a tangent with 3060s, 3050s, 2080s, etc. because of "ray tracing"--which I cannot believe the A380 is any good at doing at all.

The only thing I can say that might be a little illuminating is that Intel can call its cores and rt hardware whatever it wants to call them, but what matters is the image quality and the performance at the end of the day. I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;) I was glad to see the notation because it demonstrates that anyone can make his own "tensor core" as "tensor" is just math. I do appreciate Intel doing this because it draws attention to the fact that "tensor cores" are not unique to nVidia, and that anyone can make them, actually--and call them anything they want--like for instance "raytrace cores"...;)
 
I wouldn't be surprised to see Intel once again take its marbles and go home and pull the ARCs altogether, as Intel did decades back with its ill-fated acquisition of Real3D. They are probably hoping to push it at a loss at retail to get some of their money back, but I think they will be disappointed when that doesn't happen. As far as another competitor in the GPU markets goes, yes, having a solid competitor come in would be a good thing, indeed, but only if the product meant to compete actually competes. This one does not. ATi/AMD have decades of experience in the designing and manufacturing of GPUs, as does nVidia, and in the software they require, and the thought that Intel could immediately equal either company's products enough to compete--even after five years of R&D on ARC--doesn't seem particularly sound, to me. So I'm not surprised, as it's exactly what I thought it would amount to.
Intel seems committed to doing dedicated GPUs, and it makes sense. The data center and supercomputer markets all basically use GPU-like hardware. Battlemage is supposedly well underway in development, and if Intel can iterate and get the cards out next year, with better drivers, things could get a lot more interesting. It might lose billions on Arc Alchemist, but if it can pave the way for future GPUs that end up in supercomputers in five years, that will ultimately be a big win for Intel. It could have tried to make something less GPU-like and just gone for straight compute, but then porting existing GPU programs to the design would have been more difficult, and Intel might actually (maybe) think graphics is becoming important.
I wondered why you didn't test with an AMD CPU... was that a condition set by Intel for the review? Not that it matters, but it seems silly, and I wonder if it would have made a difference of some kind. I thought the review was fine as far as it goes, but one thing I felt was unnecessarily confusing was the comparison of the A380 in "ray tracing" with much more expensive nVidia solutions. You started off restricting the A380 to the 1650/Super, which doesn't ray trace at all, and the entry-level AMD GPUs which do (but not to any desirable degree, imo)--which was fine, as they are very closely priced. But then you went off on a tangent with 3060s, 3050s, 2080s, etc. because of "ray tracing"--which I cannot believe the A380 is any good at doing at all.
Intel set no conditions on the review. We purchased this card, via a go-between, from China — for WAY more than the card is worth, and then it took nearly two months to get things sorted out and have the card arrive. That sucked. If you read the ray tracing section, you'll see why I did the comparison. It's not great, but it matches an RX 6500 XT and perhaps indicates Intel's RTUs are better than AMD's Ray Accelerators, and maybe even better than Nvidia's Ampere RT cores — except Nvidia has a lot more RT cores than Arc has RTUs. I restricted testing to cards priced similarly, plus the next step up, which is why the RTX 2060/3050 and RX 6600 are included.
The only thing I can say that might be a little illuminating is that Intel can call its cores and rt hardware whatever it wants to call them, but what matters is the image quality and the performance at the end of the day. I think Intel used the term "tensor core" to make it appear to be using "tensor cores" like those in the RTX 2000/3000 series, when they are not the identical tensor cores at all...;) I was glad to see the notation because it demonstrates that anyone can make his own "tensor core" as "tensor" is just math. I do appreciate Intel doing this because it draws attention to the fact that "tensor cores" are not unique to nVidia, and that anyone can make them, actually--and call them anything they want--like for instance "raytrace cores"...;)
Tensor cores refer to a specific type of hardware matrix unit. Google has TPUs, and various other companies are also making tensor core-like hardware. TensorFlow is a popular tool for AI workloads, which is why the "tensor cores" name came into being, AFAIK. Intel calls them Xe Matrix Engines, but the same principles apply: lots of matrix math, focusing especially on multiply and accumulate, as that's what AI training tends to use. But tensor cores have literally nothing to do with "raytrace cores," which need to work with DirectX Raytracing (or Vulkan RT) structures to be at all useful.
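For what those matrix engines actually compute, here's a tiny NumPy sketch of the tile-level multiply-accumulate (D = A x B + C) that tensor-core/XMX-style hardware performs; the tile size and dtypes below are illustrative, not Intel's actual XMX configuration:

```python
# Tiny NumPy sketch of the tile-level multiply-accumulate that tensor-core /
# XMX-style units perform in hardware: D = A @ B + C. Real hardware works on
# small fixed-size tiles at reduced precision (FP16/BF16/INT8 inputs with
# wider accumulators); this only shows the math, not the performance.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)).astype(np.float16)  # low-precision inputs
B = rng.standard_normal((16, 16)).astype(np.float16)
C = np.zeros((16, 16), dtype=np.float32)              # wider accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C   # multiply-accumulate
print(D.shape)  # (16, 16)
```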
 

escksu

Reputable
BANNED
The ray tracing shows good promise. The video encoder is the best of the bunch. 3D performance is meh, but still good enough for light gaming.

If its retail price is indeed what it shows, then I believe it will sell. Of course, Intel won't make much (if anything) from these cards.
 

InvalidError

Titan
Moderator
Wow, 50% larger die size (much more expensive for Intel vs. AMD) and performs much worse than the 6500XT. Stick a fork in Arc, it's done.
If the RX 6500 had 4x display outputs instead of only 2x, a 96-bit memory controller instead of 64-bit, PCIe 4.0 x8 instead of 4.0 x4, full media encode+decode engines for most of the newest major codecs instead of only decode of older formats, matrix/tensor cores, etc., it would probably land at a similar size.

Intel's biggest problem continues to be sub-par drivers, assuming the sub-parness isn't being caused by hardware quirks holding things back.
 

rluker5

Distinguished
I have an ASRock Challenger and it plays mostly the same as an Nvidia or AMD card that can do the same fps. There are far more similarities than differences, as the games that work will generally feel the same. I haven't tested that many, but all have worked to the card's meager abilities. I also generally get instability above 25% on the overclocking slider and don't have fan control, but I don't need it much either. I also don't much like how the Arc Control overlay works, or the startup notification, or the AMD-style rough OC failure, and I did have the usual Intel issue with multi-stream DisplayPort on my Panasonic TC-58AX800u where the halves don't sync, but HDMI 2.0 works 100%. And the driver's graphics control features are extremely basic on the gaming side right now. But the display features in Intel Graphics Command are quite nice.

But I can see these cards being worth an even price-to-performance ratio against something from AMD or Nvidia in the near future. Even now, these new Intel ones aren't that different: Windows goes with some basic display driver until you install yours, then you play the game. But I do have ReBAR and PCIe 4.0, so I'm sure that helps. To be honest, the A380 is more boring than I was expecting. It seemed like a bigger change going from a 3080 to a 6800 than going from both of those to an Arc. Arc kind of has that better-pixels thing that AMD has over Nvidia, and will hopefully start to leverage its better integrated RT and XMX to stand out a bit, and of course get faster. It would be more fun with XeSS and some more driver-level enhancement features.

I also like that the aluminum block on the ASRock looks so much like the block on a stock Intel cooler :p That card has no extravagances. Even the plastic cover has extra screw holes, like they grabbed some general stock part off a shelf.
 
More than the pure performance numbers on everything, I like that you included encoding tests after GPU reviews have turned a blind eye to them for so long, but I do have a couple of annoyances:

1. Not all software will use the encoding pipeline the same way, and I would only trust professional or very popular software to do a decent enough job. In particular, OBS can do everything, and if you want my advice, use OBS (they already have nightly builds for both the AMD improvements and Intel's ARC cards. EDIT: they haven't added ARC support yet, so I correct myself here). Plus, it allows you to use QuickSync, because even if ARC does offer encoding, do not forget Intel has offered hardware encoding through QuickSync for a VERY long time.

2. No details of the settings exposed and used. This is especially important for CPU encoding. I can make H.264 produce the most beautiful encode by sacrificing FPS, or get zero FPS drops for an acceptable quality. I've already run extensive testing on this and I've shared some results in the Discord (if you're interested, I can share my files as well). Showing the settings is SUPER important, so please do share them next time; a rough sketch of the kind of settings sweep I mean follows below.
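Something like this, just as an illustration (placeholder paths and values, not anyone's actual test settings): the same clip at the same bitrate, encoded with different libx264 presets, is enough to swing both quality and encode FPS dramatically.

```python
# Illustration only (placeholder paths/values): the same clip at the same
# bitrate, encoded with different libx264 presets. Slower presets trade
# encode FPS for quality, which is why the settings need to be disclosed
# alongside the results.
import subprocess

SRC = "test_clip_4k.mp4"
for preset in ["ultrafast", "medium", "veryslow"]:
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", SRC,
            "-c:v", "libx264",
            "-preset", preset,  # the speed/quality trade-off knob
            "-b:v", "6M",       # hold the bitrate constant across runs
            f"out_x264_{preset}.mp4",
        ],
        check=True,
    )
```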

Overall, this doesn't change my view on ARC. The frog is boiling.

All in all, I'm happy for the review in any case. Thanks a lot!

Regards.
 

waltc3

Honorable
Intel seems committed to doing dedicated GPUs, and it makes sense. The data center and supercomputer markets all basically use GPU-like hardware. Battlemage is supposedly well underway in development, and if Intel can iterate and get the cards out next year, with better drivers, things could get a lot more interesting. It might lose billions on Arc Alchemist, but if it can pave the way for future GPUs that end up in supercomputers in five years, that will ultimately be a big win for Intel. It could have tried to make something less GPU-like and just gone for straight compute, but then porting existing GPU programs to the design would have been more difficult, and Intel might actually (maybe) think graphics is becoming important.

Thanks! I appreciate the response. I don't think Intel can afford to "lose billions" at this point. In five more years, AMD and nVidia will be five years ahead of where they are now, and I'm skeptical of Intel's ability to catch up, as the other companies already have multi-decade leads in R&D and manufacturing. Intel has a lot going on with just its fabs, and as you point out, the launch of ARC was pretty terrible, and still is, actually...;) Obviously, Intel cannot do everything and is not a bottomless pit of cash, despite the myth, imo. Its monopoly on high-end x86 CPUs is over and done, which is a solid point worth mentioning.

Intel set no conditions on the review. We purchased this card, via a go-between, from China — for WAY more than the card is worth, and then it took nearly two months to get things sorted out and have the card arrive. That sucked. If you read the ray tracing section, you'll see why I did the comparison. It's not great, but it matches an RX 6500 XT and perhaps indicates Intel's RTUs are better than AMD's Ray Accelerators, and maybe even better than Nvidia's Ampere RT cores — except Nvidia has a lot more RT cores than Arc has RTUs. I restricted testing to cards priced similarly, plus the next step up, which is why the RTX 2060/3050 and RX 6600 are included.

Well, I can only imagine that Intel hasn't released it in the West because it doesn't think Arc would do well in Western markets. It's hard to imagine it doing well in China, frankly.

Tensor cores refer to a specific type of hardware matrix unit. Google has TPUs, and various other companies are also making tensor core-like hardware. TensorFlow is a popular tool for AI workloads, which is why the "tensor cores" name came into being, AFAIK. Intel calls them Xe Matrix Engines, but the same principles apply: lots of matrix math, focusing especially on multiply and accumulate, as that's what AI training tends to use. But tensor cores have literally nothing to do with "raytrace cores," which need to work with DirectX Raytracing (or Vulkan RT) structures to be at all useful.

I only brought it up after having conversations with people in other forums who think nVidia's tensor cores are trademarked and that only nVidia can make and use "tensor cores"--so I think it astute that you have mentioned Intel's use of its own tensor cores here. nVidia marketed that well, as a tensor core can be any core which does tensor math, in any product. My point about AMD's "ray trace" cores was only to illustrate the marketing--nVidia does ray tracing but does not use "ray trace cores" at all, whereas AMD specifically markets them that way. I think you made a good point about that.
 

InvalidError

Titan
Moderator
Well, I can only imagine that Intel hasn't released it in the West because it doesn't think Arc would do well in Western markets. It's hard to imagine it doing well in China, frankly.
The ASRock Challenger A380 went up for sale in the USA on August 23, and in Canada on August 26. I doubt ASRock would do this without Intel's blessing, so I'd guess Intel effectively stealth-launched it and ASRock simply happened to be the only international AIB with stock ready to ship.

IIRC, there was uncertainty between AIBs and Intel about whether Alchemist would end up getting scrapped in favor of skipping to Battlemage and I hypothesized that a silicon revision could also be on the way to fix nagging hardware quirks causing driver development headaches. Either option would explain most of the usual suspects not having a hoard of boards ready to ship.
 

perseco

Reputable
So basically this is the card to stick in your home lab server that handles video encoding for your media. And it's reasonably priced for that purpose? Cool. Anyone that honestly expected this to be a competitive gaming GPU set their expectations way too high. That won't be coming until 2nd or 3rd gen, assuming Intel continues the project.
 

InvalidError

Titan
Moderator
Anyone that honestly expected this to be a competitive gaming GPU set their expectations way too high.
For $140, the A380's performance is mostly ok when compared to other similarly priced options and you get considerably more features per dollar.

I'd buy an A380 for $140 if Intel sorted out its compatibility, consistency and performance issues.
 
So basically this is the card to stick in your home lab server that handles video encoding for your media. And it's reasonably priced for that purpose? Cool. Anyone that honestly expected this to be a competitive gaming GPU set their expectations way too high. That won't be coming until 2nd or 3rd gen, assuming Intel continues the project.
That's a possibility, yes, but...

  1. AV1 support is far from ubiquitous
  2. Nvidia and AMD do fine with HEVC if you want to go that route
  3. Setting up encoding software to automatically use the A380 encoding may or may not be simple (depending on the software you use for your media server)
  4. There are still potential issues with drivers, even if you're not gaming

I really do hope Intel can launch Battlemage next year in a reasonable amount of time. By next summer would be great. Later than that, probably not so much.
 

InvalidError

Titan
Moderator
I really do hope Intel can launch Battlemage next year in a reasonable amount of time.
From a budget shopper's point of view, it doesn't matter when it launches as long as the performance and features are still competitive for the price by the time it launches. The A380 is quite inconsistent on performance, ranging from great to horrible for the price, and the features look great on paper, though a few have been 'delayed' to focus on more urgent issues. It seems to be largely a matter of drivers failing to deliver.
 
From a budget shopper's point of view, it doesn't matter when it launches as long as the performance and features are still competitive for the price by the time it launches. The A380 is quite inconsistent on performance, ranging from great to horrible for the price, and the features look great on paper, though a few have been 'delayed' to focus on more urgent issues. It seems to be largely a matter of drivers failing to deliver.
Budget isn't the focus of really any GPU company. They might make budget GPUs, but the margins are so much smaller there. My quick and dirty (and not entirely accurate) rule of thumb is that a graphics card needs to sell for more dollars than its die size measures in square millimeters to be even close to a decent money maker.

With the ACM-G11 in the Arc A380 measuring 157 mm^2, ideally it should sell for $160 or more to be a modest money maker, and more is better (for the manufacturer and Intel). It's not going to hit that price, but the bigger issue is going to be the ACM-G10, which measures 406 mm^2. Again, the ballpark figure for TSMC N6 is that it needs to sell for $400 or more, but even if performance is better than an RTX 3060, will the Arc A750 and Arc A770 sell for that much? I suspect not, which means margins are even slimmer.

The estimated die size for ACM-G10 is 25.0x16.3mm. Intel can get at most about 140 chips out of a single 300mm wafer, some of which will be lost to defects, though many of those will simply end up in an A580 or A730 instead of an A770 (or the mobile equivalents). TSMC charges somewhere around $10,000 per wafer for N7, and N6 is probably similar, so the pure chip cost and nothing else would be about $75. Packaging and bonding come into play, meaning the chips probably cost Intel closer to $100. Then add in memory, PCB, cooler, and all the other bits and pieces, and you can see how quickly the bill of materials cost escalates.

I'm sure it's less than $200 for everything, but the point is that Intel needs to make money, their AIB partners need to make money, the distributors need to make money, and the retail stores need to make money. A 15% margin at each step (which isn't really profit, because employees need to be paid, shipping costs, etc.) would mean a $400 A770 costs the retail outlet $340, the distributor $289, the AIB partner $245, and Intel gets about $210 (give or take). So there's some margin, but not a lot if the BOM is anywhere close to $200.

What about the ACM-G11 chips and cards? The die measures about 12.9x12.2mm, and Intel could get around 375 per wafer. The cost of just the chip would be $27 or so, but packaging and all the other bits and pieces add to that. We know Intel has set a base MSRP of $140 on the A380. Again using 15% margins: retail pays $119, the distributor $101, the AIB $86, and Intel would need to sell for around $73. I'd guess Intel's margin per A380 chip is well below $25, which is why every GPU company is far more interested in higher-cost products.
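For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch; the dies-per-wafer formula is the standard approximation (it ignores yield and edge exclusion, which is why it comes out a bit optimistic for the small die), and the wafer price and margin are the same rough assumptions as above:

```python
# Back-of-the-envelope sketch of the estimates above: gross dies per 300 mm
# wafer (standard approximation; ignores edge exclusion and yield) plus the
# ~15%-per-step margin chain working back from the retail price. The wafer
# cost and die dimensions are the same rough assumptions as in the post.
import math

def dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300.0) -> int:
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

def margin_chain(retail: float, steps: int = 4, margin: float = 0.15) -> list[int]:
    """Price each tier pays, working back from retail with ~15% margin per step."""
    prices = [retail]
    for _ in range(steps):
        prices.append(prices[-1] * (1 - margin))
    return [round(p) for p in prices]

WAFER_COST = 10_000  # rough TSMC N6/N7 assumption

g10 = dies_per_wafer(25.0, 16.3)  # ACM-G10: ~140 gross dies per wafer
g11 = dies_per_wafer(12.9, 12.2)  # ACM-G11: ~400 gross dies, fewer after edge/yield losses
print(g10, round(WAFER_COST / g10))  # ~140 dies, ~$71 per die before packaging
print(g11, round(WAFER_COST / g11))  # ~396 dies, ~$25 per die before packaging

print(margin_chain(400))  # [400, 340, 289, 246, 209] -> the A770 example
print(margin_chain(140))  # [140, 119, 101, 86, 73]   -> the A380 example
```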
 

InvalidError

Titan
Moderator
I'd guess Intel's margin per A380 chip is well below $25, which is why every GPU company is far more interested in higher-cost products.
That is the manufacturer's problem, not the shopper's.

In a market with healthy competition, manufacturers cannot expect to make net margins much beyond 15% before competition starts flooding in and drags net margins down to 10% or less. Investors may not be happy with it, but it is still viable. No doubt GPUs getting 20+% net margins lately played a significant role in Imagination contemplating a return to PC graphics.
 
That is the manufacturer's problem, not the shopper's.

In a market with healthy competition, manufacturers cannot expect to make net margins much beyond 15% before competition starts flooding in and drags net margins down to 10% or less. Investors may not be happy with it, but it is still viable. No doubt GPUs getting 20+% net margins lately played a significant role in Imagination contemplating a return to PC graphics.
Intel's margins on CPUs have traditionally been in the 60% and higher range. Obviously, that's part of how Intel got so big, but Intel also sold off its SSD division because the margins weren't high enough in the consumer market. I mean, when you think about it, there's a lot more stuff going into a GPU than a CPU, and yet their prices often aren't all that dissimilar. Well, except in the extreme performance range.

What I'm getting at is that Intel isn't trying to break into the GPU arena just to make 10-15%. GPU companies were probably doing more like 100% or higher margins over the past two years, which was also unsustainable. Anyway, the A380 is more a test vehicle and a way to prep drivers for the bigger chips, IMO. Nvidia and AMD almost always launch high-end first, but Intel knew it wasn't ready with drivers, and the A380 helps get them a lot closer. Everyone buying and testing the A380 and submitting bugs is basically doing the QA for Intel.
 

InvalidError

Titan
Moderator
Intel's margins on CPUs have traditionally been in the 60% and higher range.
That is gross margin, not net. Intel's overall net margin is ~25% on ~45% gross, which went down to 35% in Q2. Also in Q2, Intel had $1.1 billion in income from its CCG on $7.6 billion of revenue, which is about 14% net on desktop/laptop CPUs. Either Intel is having some tough sales or it is doing some creative accounting to hide losses from Foundry, Graphics, and "All others".

What I'm getting at is that Intel isn't trying to break into the GPU arena just to make 10-15%.
And the reality is nobody will give Intel the proverbial GPU time of day until it has proven that it can actually deliver for 2-3 consecutive generations, which it has miserably failed to do with Alchemist. Until it successfully delivers, or we get another crypto boom or another COVID PC-building frenzy, it will be stuck being the ultra-budget option, like AMD's RX 400-500 and Ryzen 1000-3000.
 
Feb 26, 2022
Budget isn't the focus of really any GPU company.

It doesn't matter what the focus of the GPU market is. Since the pandemic started, GPUs have been outrageously overpriced, and they were expensive even before that.
Nvidia caused the shortage and has been laughing all the way to the bank.

This card breaks the monopoly, and I can guarantee Intel is not losing any money at all at this price point, nor would, for that matter, any other GPU maker.