PCI Express 3.0: On Motherboards By This Time Next Year?


g00ey

Distinguished
Aug 15, 2009
PCIe 3.0 sounds sweet, but I'm a lot more curious about what they're going to do to increase the number of lanes so that the full bandwidth of PCIe 3.0 can actually be used. As it is today, I've seen AMD motherboards with up to 4 PCIe x16 slots, but the problem is that there aren't enough lanes to give full x16 communication to all of those slots; you can either get 4 x8 slots or 2 x16 (I'm not sure what happens to the other two when they operate in 2 x16 mode). I have also seen Intel motherboards sporting up to 7 PCIe x16 slots. I don't know much about the Intel chipset (I think it's X58), but I think there are similar problems there too.
 

neiroatopelcc

Distinguished
Oct 3, 2006
[citation][nom]g00ey[/nom]PCIe 3.0 sounds sweet, but I'm a lot more curious about what they're going to do to increase the number of lanes so that the full bandwidth of PCIe 3.0 can actually be used. As it is today, I've seen AMD motherboards with up to 4 PCIe x16 slots, but the problem is that there aren't enough lanes to give full x16 communication to all of those slots; you can either get 4 x8 slots or 2 x16 (I'm not sure what happens to the other two when they operate in 2 x16 mode). I have also seen Intel motherboards sporting up to 7 PCIe x16 slots. I don't know much about the Intel chipset (I think it's X58), but I think there are similar problems there too.[/citation]

What happens is fairly simple and straightforward!

On P55 you have 16 lanes via the CPU. If one PCIe card is used, all 16 go to that one card. If you add a second, the system will automatically split them into two x8 connections.
On your motherboard, with X58, you have up to two cards operating at x16; if you add another you'll get one x16 and two x8, and with four you get four x8. It all happens automatically, although in some BIOSes you can force the system to split even when it isn't necessary. I don't know how the last 3 slots on your board are connected. Probably they'll all get x4, or there's an NVIDIA bridge chip splitting lanes between all of the slots.
I don't know the lane counts on AMD, but the system is similar.


PS: On old PCIe 1.0 boards you sometimes had to move physical jumpers or switches to change this. Not anymore.
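
If you want to put rough numbers on that splitting, here's a quick Python sketch I threw together. The per-lane rates and encoding overheads are the nominal spec figures; the 4-card line is just arithmetic on a 16-lane pool, not a claim that any particular chipset supports that split:

[code]
# Rough per-card bandwidth when a fixed lane pool is split across several
# graphics cards. Figures are one direction; encoding overhead is 8b/10b
# for PCIe 1.x/2.0 and 128b/130b for PCIe 3.0.
GEN = {
    # generation: (transfer rate in GT/s, encoding efficiency)
    "1.x": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def lane_gb_s(gen):
    """Usable bandwidth of a single lane in GB/s (one direction)."""
    rate_gt_s, efficiency = GEN[gen]
    return rate_gt_s * efficiency / 8  # 8 bits per byte

def per_card(gen, total_lanes, cards):
    """Lanes and bandwidth each card gets if the pool splits evenly."""
    lanes = total_lanes // cards
    return lanes, lanes * lane_gb_s(gen)

for cards in (1, 2, 4):
    lanes, bw = per_card("2.0", 16, cards)  # P55-style 16-lane CPU pool
    print(f"PCIe 2.0, {cards} card(s): x{lanes} each, about {bw:.1f} GB/s per card")
[/code]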
 

ta152h

Distinguished
Apr 1, 2009
[citation][nom]liquidsnake718[/nom]Well then one could argue that developers like ATI and Nvidia should focus on onboard/integrated GPUs and develop something to replace those PCIe 2.0 or 3.0 lanes for bandwidth... or connect these integrated GPUs in a way that they can maximize the PCIe bandwidth. A USB 3.0 connection for a GPU is a good idea and I'm sure it has been thought of already.[/citation]

I'm sensing you don't understand how these things work.

You don't need bandwidth if you put enough memory on the card to begin with. NVIDIA can't do anything like that anyway, since they have no control over the motherboard. ATI is moving the GPU onto the processor, and Intel has already moved it closer. These are small GPUs for now, but they'll no doubt get bigger with time.

USB would be a horrible idea for a GPU connection. Again, I'm pretty sure you're not looking at the whole picture. You're probably not taking latency into account, which is very important. USB 3.0 has extremely high latency compared to a PCIe slot, and in fact anything using USB 3.0 has to go through the PCIe interface anyway.

Bandwidth doesn't matter for cards unless they use more memory than the card has, but latency is always part of it. If it takes forever to get commands from the CPU to the GPU, which is a latency problem, you'll see dreadful performance regardless.

USB is best known as "Universally shi**y bus". It's always been poor and inefficient, and it likely always will be. It's only useful when performance doesn't matter much; it's clearly an implementation of convenience over performance. Virtually every other modern interface is more efficient. It's not the answer for high-performance parts, and it was never intended to be.
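
To illustrate the latency point with a toy model (the latency and bandwidth numbers below are illustrative guesses, not measured values for USB 3.0 or PCIe):

[code]
# Why latency matters for small GPU command traffic even when raw bandwidth
# looks huge. Numbers are made up for illustration only.
def transfer_time_us(size_bytes, latency_us, bandwidth_gb_s):
    """Time to move one message: fixed latency plus size / bandwidth."""
    return latency_us + size_bytes / (bandwidth_gb_s * 1e3)  # GB/s -> bytes per µs

for name, size in (("256-byte command", 256), ("64 MB texture", 64 * 2**20)):
    low_lat = transfer_time_us(size, latency_us=1, bandwidth_gb_s=8)      # PCIe-like (assumed)
    high_lat = transfer_time_us(size, latency_us=100, bandwidth_gb_s=0.5)  # USB-like (assumed)
    print(f"{name}: low-latency link {low_lat:.1f} µs, high-latency link {high_lat:.1f} µs")
[/code]

For the small command buffer the fixed latency completely dominates; for the big texture the bandwidth term does. GPUs send a constant stream of the former.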

 

stasdm

Distinguished
Feb 10, 2009
One more big lie from nVidia: there is no graphics card on the market today that uses more than 1.2 GB/s. So PCIe 3.0 is not really needed for them; it will take at least 10 more years to fully utilize the 2.0 bandwidth. nVidia's only purpose in participating was to push for a more powerful power connector and backward compatibility with their "Graphics PCIe" (no parity control) bus, which is still used in all nVidia cards.
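
Just to put that figure in context, taking the 1.2 GB/s number above at face value (it's the poster's estimate, not something measured):

[code]
# How much of a PCIe 2.0 x16 link would a card moving ~1.2 GB/s use?
pcie2_x16_gb_s = 16 * 0.5      # 16 lanes x 500 MB/s per lane, one direction
card_traffic_gb_s = 1.2        # figure claimed above
print(f"That's {card_traffic_gb_s / pcie2_x16_gb_s:.0%} of a PCIe 2.0 x16 link")
# -> 15%
[/code]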
 

stasdm

Distinguished
Feb 10, 2009
And a rather big mistake from the author: full-speed SATA 6Gb/s SSDs will arrive quite soon. Right now the 500 MB/s of a single PCIe lane is the main obstacle (multi-lane controllers will be too expensive for the consumer market). SAS versions will come a bit later (as usual, SSDs have to prove their reliability first).
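
Rough numbers behind that, for anyone curious, using the nominal link rates and encoding overheads (real-world controllers land a bit lower):

[code]
# A single PCIe 2.0 lane can't carry a full SATA 6Gb/s stream; one PCIe 3.0 lane can.
sata_6g_mb_s  = 6000 * (8 / 10) / 8         # 6 Gb/s, 8b/10b encoding -> 600 MB/s
pcie2_x1_mb_s = 5000 * (8 / 10) / 8         # 5 GT/s, 8b/10b -> 500 MB/s
pcie3_x1_mb_s = 8000 * (128 / 130) / 8      # 8 GT/s, 128b/130b -> ~985 MB/s
print(sata_6g_mb_s, pcie2_x1_mb_s, round(pcie3_x1_mb_s))
[/code]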
 

insightdriver

Distinguished
Nov 28, 2006
I believe those of us who recently bought X58 boards that include USB 3.0 and PCIe 3.0 are going to be pleased that our machines will be at the high end of the curve for at least a couple of years. With computers, where the technology improves constantly and incrementally, those of us who build our own have to make smart choices about the technology we want now versus the realistic cost constraints we all face.

My previous-generation machine was one of the last of the high-end AGP 8x boxes, the direct predecessor to PCIe. I had put the best AGP video card on the market in that machine. Other than maxing out the DDR2 RAM, that machine is fading into the background while the current stuff surges ahead.
 

pochacco007

Distinguished
Aug 3, 2008
The fools are the ones who bought into this gimmick. I still run a first-generation PCI Express x16 card and it's doing fine. The people who upgrade are the fools. Look at current graphics cards: there is little to nothing that's worth the upgrade. What games are there to play?! StarCraft II?! That game doesn't need a $500+ super graphics card.
 

NuclearShadow

Distinguished
Sep 20, 2007
[citation][nom]pochacco007[/nom]The fools are the ones who bought into this gimmick. I still run a first-generation PCI Express x16 card and it's doing fine. The people who upgrade are the fools. Look at current graphics cards: there is little to nothing that's worth the upgrade. What games are there to play?! StarCraft II?! That game doesn't need a $500+ super graphics card.[/citation]

You're thinking awfully small-minded by limiting PCIe to just video cards and gaming. You may be content and have the performance you need, but what you do is nothing compared to the people who would actually benefit from PCIe 3.0 right now. You do realize there is a much bigger picture than your little desktop, right?
 

gaborbarla

Distinguished
[citation][nom]hardcore_gamer[/nom]finally.........[/citation]
Yeah, exactly. Moore's law out the window there. I am all for doubling the speed of anything :) but honestly PCIe speeds are probably overrated. In the past, before geometry calculations were done on the card, there was a lot of traffic on the AGP/PCIe bus. That was also because our cards never had enough onboard RAM to store all the textures.
Now, unless you run out of onboard GDDR5, textures are cached and the card is mostly talking to itself rather than to the CPU.
So now, with double the bandwidth, we're going to see another 2%-5% increase in speed. Yippee!
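
To put a rough number on why the gain is small once everything fits in VRAM: the bus mostly matters during one-time uploads, not per frame. A toy calculation with an assumed texture payload and nominal x16 link rates:

[code]
# Time to upload an assumed ~1 GB of textures once, then the card works locally.
vram_payload_gb = 1.0                            # assumption, not a real game's footprint
pcie2_x16_gb_s = 16 * 5.0 * (8 / 10) / 8         # = 8 GB/s
pcie3_x16_gb_s = 16 * 8.0 * (128 / 130) / 8      # ~15.75 GB/s
for name, bw in (("PCIe 2.0 x16", pcie2_x16_gb_s), ("PCIe 3.0 x16", pcie3_x16_gb_s)):
    print(f"{name}: ~{vram_payload_gb / bw * 1000:.0f} ms one-time upload")
[/code]

Saving ~60 ms at level load is not something you notice in frame rates.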

USB3? Now THAT we need bigtime.
 
Guest
Sheesh!
Double the bandwidth only translates into slightly larger textures at slightly higher resolutions when playing 3D.
I guess the main aim is to get computers working in multi-monitor mode.

As far as I'm concerned, ATI Eyefinity is already good enough to power 3 (to 5) monitors at a time, good enough for gaming.
I doubt we'll see many people with a multi-monitor setup (more than 3 monitors) on their desktop.
2 monitors seems to be the (business) standard of today,
and 3 monitors seem like a gamer's dream.
I think 3D will be more of a hot issue than running 4 or 5 monitors on the desktop.
 

Alvin Smith

Distinguished
Not much mention of video editing rigs and 4K/RAW & HD-SDIx2 & 3D/HD ...

... SSDs have already been RAIDed (both internally and externally) WELL beyond the stated saturation limits, and dual HD-SDI capture requires a good bit of bandwidth as well. Add two HD monitors for an edit timeline, another monitor (or two) for media bins and shot lists, and another monitor for output. When you combine FireWire ingest, "pro-sumer" HD-SDI, RAIDed SSD render drives (multiple arrays), a pile of monitors, DAW and control-surface interfaces, AND the Mercury Playback Engine, et al. ... well, I'm laughing at any "excuses" for the delays, though more development does mean fewer bugs at final release.
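
For scale, a rough tally of what a capture-plus-render setup like that chews up. The HD-SDI figure is the nominal 1.485 Gb/s link rate; the SSD numbers are assumed drive speeds, not any particular product:

[code]
# Sustained bandwidth for dual HD-SDI capture plus a striped SSD render array.
hd_sdi_link_gb_s = 1.485 / 8              # one HD-SDI link, ~0.19 GB/s
dual_hd_sdi_gb_s = 2 * hd_sdi_link_gb_s   # two capture links
ssd_raid_gb_s = 4 * 0.25                  # assume four ~250 MB/s SSDs striped
total = dual_hd_sdi_gb_s + ssd_raid_gb_s
print(f"Capture ~{dual_hd_sdi_gb_s:.2f} GB/s + render RAID ~{ssd_raid_gb_s:.2f} GB/s "
      f"= ~{total:.2f} GB/s sustained")
# That's already a few PCIe 2.0 lanes' worth before monitors, audio and control surfaces.
[/code]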

Good article. A little too kind to INTEL ... IMO.

= Al =
 

Travis Beane

Distinguished
Aug 6, 2010
I would pay for a motherboard with two PCIe 3.0 slots, and I would buy two ATI cards that are PCIe 3.0 only.
I would also expect that each card draws more than 75 watts from the bridge. I hate having to hook up extra cables to my graphics card when the power is there and waiting, all in the interest of compatibility.
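
For reference, the power limits as I understand the spec (75 W from an x16 slot, 75 W per 6-pin, 150 W per 8-pin), and why bigger cards still need those cables today; the wattage examples are hypothetical:

[code]
# Smallest standard combo of auxiliary connectors that covers a card's draw.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def connectors_needed(card_watts):
    combos = [([], 0), (["6-pin"], SIX_PIN_W), (["8-pin"], EIGHT_PIN_W),
              (["6-pin", "6-pin"], 2 * SIX_PIN_W),
              (["8-pin", "6-pin"], EIGHT_PIN_W + SIX_PIN_W),
              (["8-pin", "8-pin"], 2 * EIGHT_PIN_W)]
    for names, watts in combos:
        if SLOT_W + watts >= card_watts:
            return names or ["slot only"]
    return ["out of spec"]

for watts in (70, 150, 250):   # hypothetical card power draws
    print(f"{watts} W card: {' + '.join(connectors_needed(watts))}")
[/code]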
 

thomasxstewart

Distinguished
Jan 16, 2006
Talking servers with 32 cores each and over 1,000 threads, PCIe 3.0 will have a run for its money there. Toastee' Cap & Million Dollar Bills.

Finally Vista 64 has room to move. Painted Wagons will EnCircle City. Of Mourning of Dusty Roads Where We Have Gone.

Nasta Drashek II
 

mdsiu

Distinguished
Oct 1, 2010
"Fusion makes no sense to me, since the GPU and CPU will not be connected using PCI-Express, and be on the same die. "

On-die graphics is far from the quality of even most low-end discrete cards, and it's likely to stay that way.
 