AMD Trinity On The Desktop: A10, A8, And A6 Get Benchmarked!

[citation][nom]JimmyD1964[/nom]I have seen some benchmarks where the new APUs from AMD have been in a Hybrid CrossFire mode and it increased graphics performance by almost 40%. It depends upon which card you do it with. I think they were using a 6670 or 6770 and it increased frame rates from 90FPS to 140FPS on the game they were testing. Pretty impressive.[/citation]

It would have been the 6670; the 6770 is not supported and would probably overwhelm any APU because it is too much faster than the APUs. From the looks of those numbers, the APU+discrete setup hit a CPU bottleneck, because I'm pretty sure that the 6670 is not weaker than even an A10 with 1600MHz dual-channel memory (although 1866MHz, and especially 2133MHz, would probably change that).
 
Guest
AMD should provide a LIST of games that scale well with Hybrid-Xfire; other than that, it's just false marketing from them, and I can't believe I got burned by their 'As Advertised' quad-core FX-4100.

"A marked increase of up to 30- 40 percent increased performance in games such as...." Well if it's just quite a few select (say, 5 percent of all the available titles in the market ) then No thank You AMD, I'm better - off with an i3 and a discrete graphics

 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
[citation][nom]tourist[/nom]I was really looking forward to using a GCN-based card with Trinity. That would have been a nice upgrade over Llano.[/citation]

But unfortunately, as I remember reading, since their latest GPU architecture isn't finished by the time they design their latest APU, they're stuck integrating the previous GPU architecture. I hope I said that right.

Anyway, maybe with good enough management or expertise, they could make enough time to integrate their latest GPU architecture into their latest APU after finishing development of the former, or maybe even develop them side by side somehow. Sounds feasible, but I wouldn't know.

I had a thought that they may not want to use GCN on Trinity for fear of it potentially killing off the entry-level market for their latest GCN cards. It sounds stupid, but it's a possibility. I say it sounds stupid since this could give them more of an entry-level CPU+GPU market. Then again, I'm not sure what their market/financial analysts or whatever came up with if they did make this decision.
 
[citation][nom]tourist[/nom]I was really looking forward to using a GCN-based card with Trinity. That would have been a nice upgrade over Llano.[/citation]

Oh, that would have been a perfect successor to Llano. It could have had far superior compute performance, letting it really shine in GPGPU hardware acceleration, and it would have given the CPU cores more thermal headroom if AMD stuck with similar IGP gaming performance. Heck, if AMD had made Trinity a 28nm die shrink like they should have, there would also have been room to alleviate the memory bandwidth bottleneck with a good caching system for the IGP, maybe even adding more memory controllers since AMD made a new socket anyway. This would have had so much higher performance than Trinity (both CPU and GPU: CPU through the higher thermal headroom allowing higher clocks, GPU through the cache and/or extra memory controllers) while using no more power than Trinity (maybe even less) that it would have truly been a family worthy of succeeding Llano.

[citation][nom]army_ant7[/nom]But unfortunately, as I remember reading, since their latest GPU architectures don't get completed as they make their latest APU, they're stuck with integrating the last GPU architecture they had. I hope I said that right. Anyway, maybe with good enough management or expertise, they could make enough time to integrate their latest GPU architecture into their latest APU after finishing development of the former, or maybe even develop them side by side somehow. Sounds feasible, but I wouldn't know. I had a thought that they may not want to use GCN on Trinity for the fear of it potentially killing off their entry-level GPU market for their latest GCN cards. Sounds stupid though, but it's a possibility. I say it sounds stupid since this could give them more of an entry-level CPU+GPU market. Though, I'm not sure what their market/financial analysts or whatever raked up in the case they made this decision.[/citation]

There's no need to kill off the weakest GCN cards, the 7750 and 7770, with a GCN APU. Just throw in something that is on par with the 800MHz reference 7750. With the 7750 now having a 900MHz reference clock, AMD would still have similar official cards stronger than any such theoretical APU. They could make Cape Verde cards CF-compatible with this APU while keeping backwards compatibility with the low-end and entry-level VLIW5-based cards from the Radeon 6000 and 7000 families that Llano is compatible with.
 

MasterCATZ

Distinguished
Feb 19, 2009
19
0
18,510
mmm

My post never posted, it seems.


Are there any 10-bit video playback tests available, using a single core with no GPU assist?

Anything can play 8-bit with GPU assistance, but as there is nothing out there that can do 10-bit with hardware acceleration, I would love to see if Trinity can do it, as my C2D 6600 cannot.

(XBMC: 12 FPS using shader acceleration on 1080p; 3 FPS in software mode.)
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980


The HD7750's reference clock got upped from 800MHz to 900MHz? Also, yeah, another reason I would think it's stupid is because people would still purchase HD7750s, for example, to make a potentially more efficient CF with their Trinity APUs if they had GCN. For now, the HD7750 is AMD's lowest-end card, right? I could imagine an HD7400 sometime in the future. Oh wait! Wiki says that there are a bunch below the HD7750, but they're only available to OEMs. http://en.wikipedia.org/wiki/Southern_Islands_(GPU_family)#Chipset_table Well, OEMs could've sold systems with Dual Graphics. But wait, they do look like older architectures based on the indicated 40nm fabrication process. (Dang it, sorry for being out of the know here.) You said VLIW5. Would GCN APUs be compatible with those?

Oh, maybe that's what AMD's plan is. Since Trinity APUs have VLIW4, they might be more "compatible" with the architectures of those OEM HD7000s for Dual Graphics. Or maybe it's the opposite, and AMD just found a way to make do with a strategy like this.
 


Crossfire, unlike SLI, is software, not hardware. It doesn't care a whole lot about the GPU architecture. Having the same GPU architecture makes it simpler and easier to implement, but is not necessary. The 7750 is AMD's lowest-end card that uses the GCN architecture for its GPU, but AMD does have lower-end cards (both OEM and retail) from the Radeon 6000 and 7000 families that use the older 40nm VLIW5 architecture.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
[citation][nom]blazorthon[/nom]Crossfire, unlike SLI, is software, not hardware.[/citation]
That's really interesting. Then it seems like it's totally superior to SLI from everything I've heard about it so far: better scaling and cross-compatibility. It still scales performance down to the weakest link though, right? So in that way it isn't quite on par with a perfected HydraLogix?
You continue to amaze me with what you know, blaz. Where'd you even learn about CF being software-based and SLI being hardware-based? (Just in case you take this as sarcastic, it really isn't. Hehe...) :)
 
Guest
You did not write what version of SKYRIM you used for the benchmark. It turns out that the original version and many of the first patches were compiled for the Pentium. In patch 1.4, after many complaints from users, they finally compiled the game using SSE and so on. That is why the performance went up drastically. If you were using the original version, I believe this may be responsible for the strange behaviour you observed.
 

kartu

Distinguished
Mar 3, 2009
959
0
18,980
[citation][nom]Anonymous[/nom]Well, where are the Ivy/Sandy i5's and i3's??? Once they are pitted against each other, that will be A TRUE measure of the APU Trinity's marketability[/citation]
Something tells me this comparison wouldn't favor Intel as far as gaming goes... ;)
 

shin0bi272

Distinguished
Nov 20, 2007
1,103
0
19,310
[citation][nom]bawchicawawa[/nom]Because this is an article about AMD's APUs. They've already done a comparison between Trinity's IGPs and Intel's 4000 series.[/citation]
You missed the point entirely but thanks for playing.
 
[citation][nom]army_ant7[/nom]That's really interesting. Then it seems like it's totally superior to SLI from everything I've heard about it so far, better scaling and cross-compatibility. It still does scale down performance to the weakest link right in a way that it isn't quite on par with a perfected Hydralogix? You continue to amaze me with what you know blaz. Where'd you even learn about CF being software-based and SLI being hardware-based? (Just in case you take this as sarcastic, it really isn't. Hehe...) :)[/citation]

CF doesn't inherently scale better than SLI. How well it scales depends on the GPUs. For example, VLIW5 GPUs tend to scale slightly worse in CF than Fermi GPUs scale in SLI, and the VLIW5 GPUs tend to be much more stutter-prone. VLIW4 GPUs and GCN GPUs tend to scale much better than Fermi and have less stutter. CF does tend to scale down to the weakest link, except in asynchronous CF modes such as one or two discrete cards in CF with an AMD IGP, so yes, CF can be considered a little inferior to Lucid's multi-GPU technology. But CF tends to scale a little better, and in situations where the weakest link isn't much weaker than the strongest, CF can beat Lucid in many cases.

SLI on Kepler also scales very well, although it doesn't seem as friendly with three-GPU setups as CF is with GCN.

About where I've read all of this? Well, it pops up in many different places. When I want to learn something, I'll often spend quite a bit of time looking into it. Crossfire doesn't have to be software; there is also a hardware version, but software CF can do so much more than hardware CF. Hardware CF, like hardware SLI, must be supported by the motherboard and the cards for it to work, whereas software CF can work on pretty much any motherboard, with a wider variety of cards, and in a wider variety of ways.

[citation][nom]tourist[/nom]Ant, please note: blaze is talking about low-end cards not needing a crossfire bridge. The 7770 and up will still need the bridge for optimal bandwidth.[/citation]

Actually, that is not exactly true. If you have a board with enough PCIe bandwidth, the bridges are still not always necessary. There are many factors to this. The bridge on a given card only needs enough bandwidth to move frame data for the displays attached to that card to and from each other card in the system. For example, a CF system with three 7970s and one 1080p display attached to each card would make a 5760x1080 Eyefinity configuration. Each card's bridge needs enough inbound bandwidth to receive frames rendered by the other two cards for its own 1080p display, and enough outbound bandwidth to send out data to the other two displays for frames rendered by that card.

So, each CF dongle needs to have at least enough bandwidth for one 1080p display in this example, because you would ideally use two dongles for three cards, and each dongle only needs to be able to transmit a single 1080p frame at a time. Assuming V-Sync of some sort is enabled, frames are limited to 60FPS, and we're using 32-bit color (24-bit color + 8-bit alpha), that's just under 500MB/s. So, theoretically, if you are using a board that has at least 32 PCIe 3.0 lanes and each card is getting at least 8 PCIe 3.0 lanes, you could probably skip the bridges and still be just fine. If you had only two 7970s in CF at this resolution on such a board, then the 16 PCIe 3.0 lanes per card would almost certainly be enough, especially considering that with the bridge, even PCIe 3.0 x8 is enough. Even a 120Hz 2560x1600 display (needing a maximum of about 2GB/s in these circumstances) should be far within practical limits.
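As a sanity check on those numbers, here's a minimal back-of-the-envelope sketch (assuming one full frame crosses the link per refresh at 4 bytes per pixel and ignoring protocol overhead; the function name is just for illustration):

```python
# Rough frame-traffic estimate for bridgeless CrossFire, per the reasoning above.
def frame_traffic_mb_s(width, height, refresh_hz, bytes_per_pixel=4):
    """MB/s needed to move one display's rendered frames between cards."""
    return width * height * bytes_per_pixel * refresh_hz / 1e6

print(frame_traffic_mb_s(1920, 1080, 60))   # ~497.7 MB/s -> "just under 500MB/s"
print(frame_traffic_mb_s(2560, 1600, 120))  # ~1966.1 MB/s -> "about 2GB/s"
```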

Theoretically, Nvidia could do something similar with SLI, but I don't think that current high-end Nvidia cards can, and assuming they can't, I don't think Nvidia cares to add such functionality. However, I'll not pretend that SLI never had advantages. It didn't always scale better, but it did often have less micro-stutter than a similar CF setup. With VLIW4 and GCN, though, this advantage waned quickly.
 
[citation][nom]tourist[/nom]You can crossfire a 7770 without the bridge? I know the 7750 will do it, but I never heard of the 7770 being able to do so.[/citation]

Yes. You can Crossfire pretty much any modern AMD graphics card without a bridge if you want to, but if you don't have enough PCIe bandwidth, this is not a good idea.
 
[citation][nom]tourist[/nom]I would like to see a link if you got it on the 7770 crossfire without a bridge. It is my understanding you still need the bridge on a 7770[/citation]

You can do even 7970 CF without a bridge. Yes, the 7770 does not need one in all situations. You need to have sufficient PCIe bandwidth and not have a ridiculously high-resolution, high-refresh-rate display configuration. For example, you're better off using something like 1080p with very high AA instead of a similarly performing 2560x1600 with little to no AA.
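And a quick illustration of why 1080p with heavy AA is friendlier than 2560x1600 here (same per-frame assumption as the sketch above: cross-card traffic grows with output pixels, while AA work stays on the GPU and doesn't enlarge the transferred frame):

```python
# Relative bridgeless-CF frame traffic: 2560x1600 vs 1920x1080.
pixels_1080p = 1920 * 1080  # 2,073,600 pixels per frame
pixels_1600p = 2560 * 1600  # 4,096,000 pixels per frame
print(pixels_1600p / pixels_1080p)  # ~1.98x the data per transferred frame
```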

Basically, you need each card to have enough PCIe bandwidth for all of the displays in addition to its other PCIe bandwidth needs. A motherboard with a PLX chip would be preferable, but it's not always necessary.

For example, consider a Z77 motherboard without PLX, which thus has 16 PCIe 3.0 lanes for graphics. The board can run two PCIe x16 cards at x8/x8. The 7750 and 7770 can probably get away with 8 lanes of PCIe bandwidth each when they don't use a bridge (granted, the 7750 doesn't even have a bridge connector as an option). You probably couldn't do this properly with much higher-end cards without more PCIe bandwidth, but I have no doubt that the 7770 is capable of it. I'll see if I can find some benchmarks of it.
 
[citation][nom]tourist[/nom]What about the SATA ports? They share the PCIe bandwidth.[/citation]

PCIe bandwidth has nothing to do with the SATA ports, except for SATA ports on PCIe adapters, and even then, it doesn't affect the PCIe lanes that are intended for graphics cards.
 
[citation][nom]tourist[/nom]What about a RevoDrive?[/citation]

A PCIe drive? Well, if it's using PCIe lanes that were intended for a graphics card, then it doesn't matter if you're doing software or hardware Crossfire because at that point, it will hurt graphics performance either way. I'm also not aware of many gamers who would use a PCIe SSD on a system with insufficient PCIe lanes for it and their graphics cards.

Besides, most computers have at least one other PCIe 2.0 x4 slot, so unless we're talking about extremely fast PCIe SSDs that can do multiple GB/s of throughput, they would not have to sacrifice graphics PCIe throughput in order to use a high-speed PCIe SSD. Anyone using such an expensive SSD would not use a cheap motherboard that can't supply enough PCIe bandwidth to all of their devices for whatever they're doing.
 


That would not affect the PCIe graphics lanes. The single-lane slots generally do not share lanes with the graphics, and if they did, it would negatively affect Crossfire with or without a bridge.
 


That shouldn't happen. There might be something wrong with your motherboard, or maybe there are Llano-specific problems with this. That wouldn't surprise me, since Llano is not intended for high-end setups. Also, why get a PCIe x1 SSD? Even a good SATA3 SSD should be faster. Only PCIe x4 or better SSDs have an advantage over SATA3.
 


Can you link your motherboard or even better, its specifications on its manufacturer's site? I'd like to take a look into this.
 


I think I see your problem, that is, if you're still trying to use CF with two discrete cards in that board. The second PCIe x16 slot only has four lanes, not 8 or 16, so it is probably tied to the x1 slots. Devices placed in an x1 slot can often interfere with a device in an x4 slot. If you're not using it and are only using the true x16 slot, then I'm not sure why you would have these problems, unless Windows and/or the motherboard have one or more problems.

Also, if it's an x1 device, then it should work in both the x4 slot and the x1 slot. If it's an x4 device, then its PCIe connector is larger than an x1 slot, so it can't fit into one.
 


If it doesn't fit in the x1 slots but fits in the x4 slot, then it is almost definitely an x4 drive. Keep in mind that many motherboards have problems with PCIe SSDs; the motherboard is probably the cause of your problems. The on-die GPU might cause problems, but I don't think it would do something like this. This looks like a motherboard/BIOS problem that may only be fixable by using a different model board. You could try a BIOS update, but that board is probably your issue here. It is a fairly low-end board, and Gigabyte might not have built it expecting any of its users to have high-end PCIe SSDs.
 

army_ant7

Distinguished
May 31, 2009
629
0
18,980
[citation][nom]blazorthon[/nom]CF doesn't inherently scale better than SLI. How well it scales depends on the GPUs. For example, VLIW5 GPUs tend to scale slightly worse in CF than Fermi GPUs scale in SLI and the VLIW5 GPUs tend to be much more stutter-prone. VLIW4 GPUs and GCN GPUs tend to scale much better than Fermi and have less stutter. CF does tend to scale down to the weakest link except in asynchronous CF modes such as one or two discrete cards in CF with an AMD IGP, so yes, CF can be considered a little inferior to Lucid's multi-GPU technology, but CF tends to scale a little better, and in situations where the weakest link isn't much weaker than the strongest, CF can beat Lucid somewhat in many situations. SLI on Kepler is also very well-scaling, although it doesn't seem as friendly with three GPU setups as CF is with GCN. About where I've read all of this? Well, it pops up in many different places. When I want to learn something, I'll often spend quite a bit of time looking into it. Crossfire doesn't need to be software, it is also hardware, but software CF can do so much more than hardware CF. Hardware CF, like hardware SLI, must be supported by a motherboard and the cards for it to work, whereas software CF can work on pretty much any motherboard and a wider variety of cards and in a wider variety of ways.[/citation]
Thank you for all those clarifications and specific case details. So what you're saying is that while SLI only has a less flexible hardware implementation, CF has both a hardware implementation and a more flexible software implementation? If that's the case, does the hardware implementation of CF have any advantages over the software version? Is there a way to choose between hardware and software implementations with CF when both are possible? When you say "software," do you mean it's managed by the drivers? Is this the reason why mobos support CF but not SLI? I'm thinking that last one might be more of a chipset thing. Thanks!

[citation][nom]blazorthon[/nom]If you have boards with great enough PCIe bandwidth, the bridges are still not always necessary. There are many factors for this. For example, the bridge only needs to be able to transmit data equal to the total amount of displays to and from the card that it is on between each card in the system. For example, a CF system with three 7970s and one 1080p display attached to each card would make a 5760x1080 Eyefinity configuration and each card's bridges need to be able to send in at least enough bandwidth for the one 1080p display that they have for frames rendered by the other two cards and enough bandwidth to send out data to the other two displays for frames rendered by that card. So, each CF dongle needs to have at least enough bandwidth for one 1080p display in this example because you would ideally use two dongles for three cards and each dongle only needs to be able to transmit a single 1080p frame at a time. Assuming V-Sync of some sort is enabled and frames are limited to 60FPS and that we're using 32 bit color (24 bit color + 8 alpha), that's just under 500MB/s. So, theoretically, if you are using a board that has at least 32 PCIe 3.0 lanes and each card is getting at least 8 PCIe 3.0 lanes, you could probably not use the bridges and still be just fine. If you had only two 7970s in CF and still used this resolution and board, then the 16 PCIe 3.0 lanes per card would almost be enough, especially considering that with the bridge, even PCIe 3.0 x8 is enough. Even with a 120Hz 2560x1600 display (needing a maximum of about 2GB/s in these circumstances) should be far within practical limits. Theoretically, Nvidia could do something similar with SLI, but I don't think that current high end Nvidia cards can do this and assuming that they can't, I don't think that Nvidia cares to add such functionality. However, I'll not pretend that SLI never had advantages. It didn't always scale better, but it did often have less micro-stutter than a similar CF setup. However, with VLIW4 and GCN, this advantage waned quickly. You can do even 7970 CF without a bridge. Yes, the 7770 does not need one for all situations. You need to have sufficient PCIe bandwidth and not have a ridiculously high MP and Hz display configuration. For example, you're better off using something like 1080p with very high AA instead of similarly performing 2560x1600 with little to no AA. Basically, you need each card to have enough PCIe bandwidth for all of the displays in addition to its other PCIe bandwidth needs. A motherboard with PLX would be preferable, but not always necessary. For example, consider a Z77 motherboard without PLX and thus having 16 PCIe 3.0 lanes for graphics. The board can run two PCIe x16 cards in x8/x8. The 7750 and 7770 can probably get away with not needing more than 8 lanes of PCIe bandwidth when they don't use a bridge (granted the 7750 obviously doesn't have a bridge as being an option). You probably couldn't do this properly with much higher end cards without more PCIe bandwidth, but I have no doubt that the 7770 is capable of it. I'll see if I can find some benchmarks of it.[/citation]
I thought that when you want a multi-monitor setup, or at least an Eyefinity setup with CF, you connect all the monitors to one card while the other(s) remain free. Or is it more efficient to distribute the monitors so that whole frames from the other cards don't suffer a bottleneck getting to the card with all the monitors, possibly alleviating the need for CF bridges? Or is that just the way you make an Eyefinity setup with CF? Wait, does one card even handle a whole Eyefinity frame, 5760x1080 for example, on its own, or does a frame get divided depending on the monitors attached to each card? I'm guessing the former, since the latter only sounds ideal when there are equally powerful cards with equal numbers of monitors, 1 or 2 each.
Is that the only purpose of CF bridges nowadays, to transfer and sync frames, in a way? How much bandwidth does a CF bridge offer?
Also, there's pretty much no harm in using a CF bridge even if your mobo/CPU already has enough PCI-E lanes, right? I'm not sure if this is the case with every card that can use a CF bridge, but do they all come with one? I'm guessing not, but do mention it either way. I was thinking it was a general AMD thing to have cards come with CF bridges, but maybe the two Radeon cards I've had so far, from XFX and Gigabyte, just happened to come with them.

[citation][nom]blazorthon[/nom]PCIe bandwidth has nothing to do with the SATA ports except for SATA ports on PCIe adapters and even then, it doesn't affect the other PCIe lanes that are intended for graphics cards.[/citation]
Are PCI-E lanes reserved in a way that only graphics cards have access to some/most of them? Oh wait! Or do you mean that when you already have (a) graphics card(s) inserted, the chipset or something assigns lanes, like x16, x16/x8, x8/x8/x8, etc., for the exclusive use of the graphics card(s)?

SO MANY QUESTIONS!!! Sorry about that. A lot of thoughts popped in my head.

BTW, on a personal side note, I just remembered an old topic we talked about, blaz, concerning PCI-E 2.0 and 3.0 cross-compatibility, where you also mentioned a PLX chip as a factor. Would you mind elaborating on PLX chips, PCI-E ones in particular, in case there are others? I found a company on Wikipedia named PLX that might have something to do with it as it was described. Anyway, back to the side note: you, blaz, along with tourist, had a pretty heated debate around that time, and I'm just happy to see that you guys are getting along.
 
[citation][nom]army_ant7[/nom]
Thank you for all those clarifications and specific case details. So what you're saying is that while SLI only has a less flexible hardware implementation, CF has both a hardware and a more flexible software implementation?
[/citation]

Yes.

[citation][nom]army_ant7[/nom]
If this is the case, does the hardware implementation of CF have any advantages over the software version? Is there a way to choose between hardware and software implementations with CF if both are possible? When you say "software," you mean it's managed by the drivers? Is this the reason why mobos support CF but not SLI? I'm thinking that last one might be more of a chipset thing. Thanks!
[/citation]

The hardware version doesn't need as much PCIe bandwidth between the cards. I'd have to look into it more to properly answer the other questions in this section, excluding the last one. Mobos that don't support CF are not supporting hardware CF, but software CF should still work. It is probably handled by the drivers, and there probably is a way to choose between hardware and software CF (where both versions are applicable), but I can confirm neither of these.

[citation][nom]army_ant7[/nom]
I thought that when you want multi-monitor setups, or at least Eyefinity setups with CF, you connect all monitors to one card while the other(s) remain free. Or is it more efficient to distribute the monitors so that whole frames from the other cards don't suffer from a bottleneck getting to the card with all the monitors and possibly alleviate the need of CF bridges? Or is that just the way you make an Eyefinity setup with CF? Wait, does one card even handle a whole Eyefinity frame, 5760x1080 for example, on its own or does a frame get divided depending on the monitors attached to each card? I'm guessing the former since the latter sounds more idealistic only when there are equally powerful cards with equal amounts of monitors, 1 or 2 each. Is that the only purpose of CF bridges nowadays, to transfer sync frames in a way? How much bandwidth does a CF bridge offer?
Also, there's pretty much no harm in using a CF bridge even if your mobo/CPU already has enough PCI-E lanes right? I'm not sure if this is the case with every card that can use a CF bridge, but they come with CF bridges right? I'm guessing not, but do mention if so or not. I was thinking it was a general AMD thing to make cards come with CF bridges, but maybe the two Radeon cards I've had so far from XFX and Gigabyte just happened to come with them.
[/citation]

Some people might not do it this way, but the Eyefinity setups that I've seen had one monitor per card when there were three cards. Each GPU renders an entire frame, even in Eyefinity setups that have displays attached to other cards. There should never be any negative consequences from using one or two CF bridges, even in situations where they aren't necessary. I don't know if it's the only purpose of a CF bridge, but I'm sure it is at least a primary purpose; they function as an additional connection between the cards.

I don't think that every AMD card with a CF dongle always comes with a bridge, but many (perhaps most) AMD cards that have one or more CF dongles do come with a bridge.

[citation][nom]army_ant7[/nom]
Are PCI-E lanes reserved in a way that only graphics cards have access to some/most of them? Oh wait! Or do you mean that when you already have (a) graphics card(s) inserted, the chipset or something assigns lanes like x16, x16/x8, x8/x8/x8, etc for example, for exclusive use of (a) graphics card(s)?

SO MANY QUESTIONS!!! Sorry about that. A lot of thoughts popped in my head.

BTW, on a personal side note, I just remembered an old topic we talked about blaz concerning PCI-E 2.0 and 3.0 cross-compatibility, where you also mentioned a PLX chip as a factor. Would you mind elaborating on PLX chips, PCI-E ones in particular in case there are others? I found a company on Wikipedia named PLX that might have something to do with it as it was described. Anyway, back to the side note. You, blaz, along with tourist had a pretty heated debate around that time and I'm just happy to see that you guys are getting along.
[/citation]

x16/x8 PCIe slots are generally used by graphics cards. Technically, you can probably use them for whatever you want, but they are generally reserved and intended for graphics cards. The chipset can have something to do with the lanes, but it's not always the deciding factor. A PLX chip is a type of chip that can give graphics cards more PCIe lanes than they would otherwise get. For example, a Z77 board with a PLX chip can run dual PCIe 3.0 x16 for graphics, and an X79 board with two PLX chips can run quad PCIe 3.0 x16. PLX chips basically double the PCIe lanes intended for graphics: they take a 16-lane connection to the CPU and make two 16-lane or four 8-lane (or any such configuration of 32 lanes) connections.
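To put rough numbers on that, here's a minimal sketch (the per-lane rates are the standard PCIe figures after encoding overhead, not something stated in this thread):

```python
# Approximate one-direction PCIe bandwidth per lane, in GB/s:
# PCIe 2.0: 5 GT/s with 8b/10b encoding   -> ~0.5 GB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
GB_S_PER_LANE = {"2.0": 0.5, "3.0": 0.985}

def link_bandwidth_gb_s(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe link."""
    return GB_S_PER_LANE[gen] * lanes

print(link_bandwidth_gb_s("3.0", 8))   # ~7.9 GB/s per card at x8/x8 (no PLX)
print(link_bandwidth_gb_s("3.0", 16))  # ~15.8 GB/s per downstream link behind a PLX
```

Note that a PLX switch doesn't add bandwidth toward the CPU: both x16 downstream links still share the single x16 uplink, so the gain is in slot width and card-to-card traffic rather than total CPU throughput.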
 