PCIe 3.0 is Here; PCIe 4.0 Already in the Works


madjimms

Distinguished
Mar 7, 2011
448
0
18,780
[citation][nom]11796pcs[/nom]Check out the new Apple PCs, they're the only products I've seen that use the ports. Also does anyone actually understand the picture in the article- it doesn't seem to actually correspond to the article. Also the bit-interfaces on graphics cards- how do they relate to this new standard- would they have to be upgraded before the new interface could be utilized completely? Also since onboard GPUs are ON the mobo how does their speed with the CPU relate vs a discrete card.[/citation]
Please don't put "Apple" comments on a monumental news story like this; it just ruins the mood.
 

Pyroflea

Distinguished
Mar 18, 2007
2,156
0
19,960
The fact of the matter is, PCI-e 2.1 is faster than any current hardware could efficiently utilize. Rather than working on these new standards that push bandwidth limits even higher, why don't we put more focus on being able to actually utilize said bandwidth?
 

someguynamedmatt

Distinguished
[citation][nom]Proxy711[/nom]I'd rather them be way ahead of the game than behind and have issues.[/citation]
Very true... you could think of it that way, too.

Just imagine what a game could look like if it were made with absolute PC optimization in mind, made for nothing other than taking full advantage of current and future hardware.
I wonder if anyone is thinking the same thing about that as I am right now.
Crysis.
You know, the game that was used to test hardware up until a year ago? It was so beautiful, even though it was made that long ago. Now compare that year's hardware to what we have now. Just imagine how beautiful a game we could make if we kept that same proportion...
 
[citation][nom]Pyroflea[/nom]The fact of the matter is, PCI-e 2.1 is faster than any current hardware could efficiently utilize. Rather than working on these new standards that push bandwidth limits even higher, why don't we put more focus on being able to actually utilize said bandwidth?[/citation]

Except we are pushing the limits of current PCIe 2.0 / 2.1, just not in the desktop department. Enterprise FC SANs currently use 8GFC as the standard per-port speed, with 16GFC already approved and on the near-term horizon. Heavy virtualization has led to host systems needing two to four HBAs with 2x FC ports each, along with two to four 1GbE or 10GbE ports, depending on the workload. With this heavy requirement for large I/O, the current generation of PCIe isn't going to cut it. Not to mention that Fusion-io drives already tax PCIe bandwidth, and those have "Enterprise use only" written all over them.
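
To put rough numbers on that, here's a back-of-the-envelope sketch. The port counts are the ones mentioned above; the per-link and per-lane figures are approximate usable per-direction rates from the published specs, and protocol overhead is ignored:

[code]
# Rough I/O budget for the virtualization host described above.
#   8GFC port      ~  800 MB/s usable per direction
#   10GbE port     ~ 1250 MB/s (line rate / 8, overhead ignored)
#   PCIe 2.0 lane  ~  500 MB/s (5 GT/s, 8b/10b encoding)
#   PCIe 3.0 lane  ~  985 MB/s (8 GT/s, 128b/130b encoding)
fc_bw  = 4 * 2 * 800     # four dual-port FC HBAs
eth_bw = 4 * 1250        # four 10GbE ports
need   = fc_bw + eth_bw

print(f"Aggregate demand:      {need} MB/s")
print(f"PCIe 2.0 lanes needed: {need / 500:.0f}")
print(f"PCIe 3.0 lanes needed: {need / 985:.0f}")
[/code]

That works out to roughly 23 gen-2 lanes versus about 12 gen-3 lanes for the same host.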
 

kelemvor4

Distinguished
Oct 3, 2006
469
0
18,780
[citation][nom]Pyroflea[/nom]The fact of the matter is, PCI-e 2.1 is faster than any current hardware could efficiently utilize. Rather than working on these new standards that push bandwidth limits even higher, why don't we put more focus on being able to actually utilize said bandwidth?[/citation]
Here's the real answer: because focusing on utilizing the bandwidth does not sell motherboards, GPUs and other associated hardware. Arbitrarily upgrading the standard does.

Same thing that happened with AGP, really. Want proof? Look at the last AGP video cards ATI made (only a couple of years ago) and compare them to the same card in the PCIe flavor. Not much difference performance-wise....

There is some value here in the enterprise-class server market; and I'm not talking about your Dell PowerEdge-class equipment, either.
 
Except that recent benchmarking shows that PCIe 8x enjoys a significant performance jump over PCIe 4x, and that PCIe 4x enjoys an even better performance jump over AGP 8x.

It's not about performance ~now~, it's about performance 2~3 years from now. Also note that higher capacity per lane allows you to use fewer lanes for equivalent data rates, which was the entire point behind my enterprise reference. Enterprise systems are running into a problem where they need multiple bridge chips to accommodate the required I/O bandwidth. Raising the speed per lane allows you to pack more data into the x4 and x1 form factors.
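
As a minimal sketch of the fewer-lanes argument (the 4000 MB/s target below is an arbitrary illustrative figure; the per-lane rates are approximate usable per-direction numbers from the published specs):

[code]
# Usable per-lane, per-direction throughput by PCIe generation (MB/s).
per_lane = {"1.x": 250, "2.x": 500, "3.0": 985}

target = 4000  # MB/s an adapter hypothetically needs to move

for gen, bw in per_lane.items():
    lanes = -(-target // bw)   # ceiling division
    print(f"PCIe {gen}: x{lanes} link needed for {target} MB/s")
[/code]

The same load that needs an x16 link on gen 1 or an x8 on gen 2 fits in an x8 with room to spare (about five lanes) on gen 3.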
 

JOSHSKORN

Distinguished
Oct 26, 2009
2,395
19
19,795
[citation][nom]zorky9[/nom]... and a CPU to handle all the bandwidth. I'll wait for Ivy Bridge and mobo/GPU with PCIe 3.0 for my next upgrade.[/citation]
...AND a game that might utilize THAT much GPU.

Yeah I'm also waiting for that for my next upgrade. That, and DDR4. I believe that's right around the corner.
 
G

Guest

Guest
[citation][nom]kelemvor4[/nom]look at the last AGP video cards ati made (only a couple years ago) and compare them to the same card in the pcie flavor. Not much difference performance wise....There is some value here in the enterprise class server market; and I'm not talking about your dell PowerEdge class equipment, either.[/citation]

A typo, but I understand what you're saying. The biggest advantage of PCIe was that it could be used for more than just graphics. Another problem with AGP was that it could not supply enough power, and even the use of a power cable couldn't solve that problem. GPUs are power hungry. A new standard was needed.
 

g00ey

Distinguished
Aug 15, 2009
470
0
18,790
Sure, it's nice that the PCIe standard keeps improving, but it would also be nice if they put some effort into making it easier to use more efficiently.

The problem as it is today is that lower-spec PCIe cards (PCIe 1.0, 2.0, ...) use up lanes that are PCIe 3.0 capable without using the full capacity of PCIe 3.0, needlessly blocking a lot of potential bandwidth.

So they should also put some effort into developing more intelligent circuitry that more efficiently reroutes the bus bandwidth. That way an x16 PCIe 2.0 card would effectively use only x8 worth of PCIe 3.0 bandwidth.

As it is today, a PCIe card configuration that is set up the wrong way can lower performance because the motherboard is unable to provide the full lane count the cards are designed for.
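
A quick sketch of that lane-equivalence point; the per-lane rates are the published usable figures, and the lane-reallocating switch is the suggestion above, not something today's chipsets actually do:

[code]
# An x16 card negotiated at PCIe 2.0 speeds moves about as much data
# as roughly eight PCIe 3.0 lanes could.
pcie2_lane = 500   # MB/s per lane, per direction (5 GT/s, 8b/10b)
pcie3_lane = 985   # MB/s per lane, per direction (8 GT/s, 128b/130b)

card_bw = 16 * pcie2_lane
print(f"x16 @ PCIe 2.0 = {card_bw} MB/s "
      f"= about x{card_bw / pcie3_lane:.1f} worth of PCIe 3.0 lanes")
[/code]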
 
G

Guest

Guest
And glReadPixels performance will still be as bad as before because of crippled OpenGL drivers from AMD and Nvidia.
 
LAPTOPS, DESKTOPS, PCIe3.0 and Cables:

One thing really overlooked is the fact that the bandwidth requirements in many cases will DECREASE. Here's what I mean:

If we have a "Graphics Box" for a laptop which contains only a graphics card, then replacing that portion of the motherboard PCIe bus with an extension cable makes sense. However, I believe that the APU (CPU + GPU) will be the processing element of choice for laptops. The main reason for this, as any gamer knows, is the issue of bottlenecking the CPU. The CPU in modern laptops has little room for a heatsink, which limits its processing capability (and laptops are getting thinner). A computer is bottlenecked by either the CPU or the GPU, so an efficient laptop is limited in how powerful the graphics option can be. So if you want a lot of processing power in the form of an add-on BOX, it's simply best to avoid any bottlenecking and heat issues and build all processing into the external box.

Since the CPU portion (as an APU) is now moved externally, we should have a second set of RAM to be used with the CPU. It need not be the full System Memory, just a portion of it.

Removing the computer (desktop or laptop) as a source of major bottlenecks by moving all the important processing and memory components to an external box removes all the confusion surrounding bottlenecks. A consumer can simply pick up the BOX which fits his budget and needs, then plug it in! A great benefit to the laptop in not using its onboard CPU for major processing is the reduction of noise. Small, high-speed CPU fans are very annoying. An external BOX, on the other hand, would likely be comprised of two large heatsinks (one on each side) and a large, slow-moving fan.

Laptop Gaming Scenario:
A highly efficient, low-power business laptop is hooked up to an external BOX (via the PCIe cable). The box is already hooked up to a desktop monitor. The laptop is passing through its video data (the external BOX need not be on for this).

Now the BOX is turned on and a game is loaded:
1) the game, "Crysis 4", is copied from the hard drive to the RAM in the BOX (not the laptop) directly via the PCIe cable.
2) the BOX's APU provides the processing. *NOTE: it may make sense to have a single Memory Unit comprised of the VRAM + RAM2.
3) the BOX sends the image to the monitor.

Analysis:
We used the PCIe cable only for copying the hard drive game data. Even with an SSD we could have easily done this with USB3. So why don't we have "APU boxes" for laptop gaming via USB3 connections?
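
For a rough sense of why USB3 would be enough for that one-time copy (the game size below is a made-up figure, and the link rates are approximate usable throughput rather than signalling rates):

[code]
# Time to copy a game install to the BOX's RAM over different links.
game_gb = 20   # hypothetical install size, GB

links = {
    "USB 3.0    (~400 MB/s usable)":  400,
    "PCIe 2.0 x4 (~2000 MB/s)":      2000,
    "PCIe 3.0 x4 (~3940 MB/s)":      3940,
}

for name, mb_per_s in links.items():
    print(f"{name}: {game_gb * 1024 / mb_per_s:.0f} s")
[/code]

Under a minute either way, which is the point: the copy is a one-off, so the link speed barely matters for this scenario.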

DESKTOP:
The real bottleneck in the desktop, as far as the PCIe bus is concerned, is communication between the CPU and the graphics cards. The demand for PCIe bandwidth obviously increases with faster graphics cards and CPUs, but it IS reduced in the following scenarios:

a) as graphics cards assume more of the processing, the CPU->GPU burden lessens, since that processing happens AFTER the PCIe bus.

b) APUs will probably replace the normal CPU + graphics card scenario. In a video game scenario, the main CPU could be used simply to send the data in the System RAM to the APUs to be processed. It's difficult to say if this scenario works, however, because now we have the PCIe bus in place of the high-speed CPU-to-System-Memory interface (see the sketch after this section).

It's most likely that the optimal solution would be a system augmented with an APU graphics card but with some of the processing still done by the CPU.
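
To show the gap behind point b), here are approximate peak figures; DDR3-1333 dual-channel is used as a stand-in for a typical desktop memory setup of the time:

[code]
# CPU-to-memory bandwidth versus the widest common PCIe link.
ddr3_1333_channel = 10.67                    # GB/s per channel (1333 MT/s * 8 bytes)
system_memory_bw  = 2 * ddr3_1333_channel    # dual channel

pcie3_lane = 0.985                           # GB/s per lane, per direction
pcie3_x16  = 16 * pcie3_lane

print(f"Dual-channel DDR3-1333: {system_memory_bw:.1f} GB/s")
print(f"PCIe 3.0 x16:           {pcie3_x16:.1f} GB/s")
[/code]

Even a full x16 gen-3 link is noticeably slower than the memory bus it would be standing in for, which is why that scenario is questionable.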

Summary:
- We could build an external processing box comprised of an APU + RAM today, using only USB3, for laptops and desktops
- APU-augmented desktop graphics cards may be ideal for obtaining the maximum processing power (before bottlenecking the PCIe bus) in a desktop

(NOTE: I've only touched on certain aspects of PCIe and the use of the cable. I'm certainly not saying we don't need increased bandwidth; I'm merely commenting that in certain scenarios we don't need that bandwidth as much as in other scenarios.)
 

josejones

Distinguished
Oct 27, 2010
901
0
18,990
Wow, what a major disappointment - none of these new 990 motherboards even have HyperTransport (HT) rev 3.1, and they still only have PCIe 2.0, yet they're all way overpriced.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007625%20600138080%20600166242&IsNodeId=1&name=AMD%20990FX&nm_mc=AFC-C8Junction&cm_mmc=AFC-C8Junction-_-RSSDailyDeals-_-na-_-na&AID=10521304&PID=4176827&SID=1s01ctot1tr0b

At those prices they should have HT 4.0, PCIe 3.0 and Light Peak too!

According to the HyperTransport Consortium website, HT 3.0 has been out since 2006 and rev. 3.1 has been out since 2008.

http://www.hypertransport.org/default.cfm?page=Technology

PCIe 2.0 has been around since January 2007.

http://en.wikipedia.org/wiki/PCI_Express#PCI_Express_2.0

It really is time for HT 4.0 and PCIe 3.0.


PS

I notice the Asus Crosshair V Formula/ThunderBolt has some updates on lag (at 0:27) and ping (at 1:30) in the Asus YouTube video, though -

http://www.youtube.com/watch?v=dE6TAyJbqso


That's pretty cool to me since I do play some online games where ping & lag are a real problem. I'd like to see more about that. I wonder just how much better it is, really?

Anyway, what are some other good uses for PCI Express???
 
G

Guest

Guest
Crysis did indeed test the GPU issue, but we know it was a DX11 / Windows 7 holdback, as the new Crysis 2 patches show crisp details but only at 75% of the values, as http://www.youtube.com/watch?v=ytOF0sMqO9c states. So the real deal is heat: you're dealing with constant full GPU operation at near-meltdown levels. New processors are in the pipeline based on, I think, carbon or graphite (I can't remember, but the TV show was on last week), where speeds were blinding with heat not an issue, and it indeed blinded my mind since I can't remember. So in hindsight we will most likely get that 1080p true 3D no-glasses gaming experience, hopefully in time for my kids when they hit 13. In the meantime my LAN-party AM2+, 3GHz overclocked, 500MHz DDR2 rig with a new ATI 6000 card will hopefully land me some frags - BF3, y'all! But that Fatal1ty Z68 is looking tempting, as the missus's i5 boots faster, lol.
 
G

Guest

Guest
You're thinking of graphene-based processors. Punctuation is nice...
 

yumri

Distinguished
Sep 5, 2010
703
0
19,160
For the people who are talking about GPUs and speed: that is most likely not why PCI Express 3.0 was developed. The only applications today which need that much bandwidth are enterprise applications, ranging from multiple 1GbE links on RJ-45 up to a couple of the fastest 40Gb/s InfiniBand links, which will indeed need the higher bandwidth given a roadmap to reach 300Gb/s by 2013. This personally makes me think the people needing such speed on the network will also need the internal speed too.
Enterprise SSDs using PCIe x4 links will probably see a speed boost once they start supporting PCIe 3.0 x4 slots. The last thing I think will saturate a PCI Express 3.0 slot will be a graphics card, as the other devices which would actually use the higher bandwidth will have used it up by then.
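
As a rough sketch of how those network links map onto PCIe 3.0 lanes (line rates only, ignoring encoding and protocol overhead):

[code]
# PCIe 3.0 lanes needed to carry various network line rates.
lane = 985   # MB/s usable per PCIe 3.0 lane, per direction

for label, gbit in [("10GbE", 10), ("40Gb/s InfiniBand", 40), ("300Gb/s roadmap", 300)]:
    mb_per_s = gbit * 1000 / 8
    print(f"{label}: ~{mb_per_s / lane:.1f} lanes")
[/code]

A 300Gb/s link would need nearly 40 gen-3 lanes, more than an x16 slot provides, so the internal bus really is the next thing that has to catch up.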
 