Fourth-Gen PCIe Sees Bandwidth Double


Matthew Busse

Reputable
It is a pity that computer companies aren't moving to SSD storage, especially for laptops. Anything with a mechanical drive feels incredibly slow. Even my Mac Pro at work feels like a lump.
 

mapesdhs

Distinguished
... Flight simulators such as FSX, ...

Don't know about the others, but the only reason it helps for FSX is that the game is staggeringly badly written. Constantly loading textures the way it does is just nuts for a flightsim.

Ian.


 

InvalidError

Titan
Moderator

Judging by the keynotes about improving DirectX and OpenGL performance on the back of Mantle's launch, the way 3D APIs have worked up to now involves tons of unnecessary API calls, and likely IO traffic too... so one could say D3D and OGL were pretty nutty to start with, even on a good day.
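To put a face on that: a rough C sketch of the per-draw pattern those keynotes were criticizing. The calls are standard OpenGL, but the scene array is invented for illustration, and it assumes a current GL context with attribute setup already done:

#include <GL/gl.h>

/* Hypothetical per-object state, for illustration only. */
struct object { GLuint texture, vbo; GLsizei vertex_count; };

void draw_scene_naive(struct object *scene, int n)
{
    /* Classic pattern: rebind state and issue one draw per object,
     * paying a driver round-trip every iteration. Thousands of
     * objects means thousands of API calls per frame. */
    for (int i = 0; i < n; i++) {
        glBindTexture(GL_TEXTURE_2D, scene[i].texture);
        glBindBuffer(GL_ARRAY_BUFFER, scene[i].vbo);
        glDrawArrays(GL_TRIANGLES, 0, scene[i].vertex_count);
    }
    /* Sorting by texture, merging buffers, or instancing collapses
     * most of these calls - the overhead Mantle was aimed at. */
}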
 

mapesdhs

Distinguished


SGI was able to obtain very good performance out of its tech years ago with GL and then, of course, OpenGL. I can only presume OGL has degraded and bloated since then, which is a shame. Reliability was critical with IRx systems for defense, oil/gas, etc., and they were able to cope with massive datasets without falling over, e.g. the Group Station for Defense Imaging, which involved I/O rates beyond 40GB/sec. In some ways I don't think anything has yet matched what a maxed-out Onyx 3900 could do with 16x IR4, except of course where the feature set of that era was a limitation in some manner. Even so, they could do some amazing things with Performer, which sits on top of OGL. Alas, how it's all come downhill since then in the world of APIs...

Btw, what I meant about FSX was that it's badly written, period - not that something inherent to OGL/D3D is holding it back. It just manages texture data very poorly compared to techniques that have been common practice in flightsims for more than 20 years.

Ian.

 

InvalidError

Titan
Moderator

The OpenGL of 20 years ago is a considerably different critter from the OpenGL of today. Among other things, shaders did not exist back then; that alone is a major game-changer in the way things get done in modern software, along with all the glue that makes the new stuff fit with the old stuff.
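For contrast with fixed-function GL, this is roughly what the shader plumbing looks like nowadays - standard OpenGL 2.0 entry points with a trivial made-up GLSL example, assuming a live GL 2.0 context and loaded entry points:

#include <GL/gl.h>

/* In old fixed-function GL, transforms and lighting were built-in
 * state; today the application compiles its own GPU programs. */
GLuint make_trivial_program(void)
{
    const char *vs_src =
        "#version 120\n"
        "void main() { gl_Position = ftransform(); }";

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);   /* upload GLSL source */
    glCompileShader(vs);                    /* compile on the driver */

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glLinkProgram(prog);                    /* link into a GPU program */
    return prog;                            /* activate with glUseProgram() */
}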
 
Peripherals are currently strangled by the limited total bandwidth available, especially between the CPU and the Southbridge. This crops up a couple of times in this article: http://www.tomshardware.com/reviews/samsung-xp941-z97-pci-express,3826.html
Additional bandwidth for teeny-tiny SSDs would be a good thing, not to mention four-port USB 3.0 adapter cards with only enough PCI-E bandwidth to run one port at a time at full speed.
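Rough numbers, for anyone who wants to check the USB 3.0 math - the encoding overheads are the standard ones, the four-port card itself is hypothetical:

#include <stdio.h>

int main(void)
{
    /* USB 3.0 SuperSpeed: 5 Gbps raw, 8b/10b encoding. */
    double usb3_port_mbs = 5e9 * (8.0 / 10.0) / 8.0 / 1e6;  /* ~500 MB/s */
    /* A PCIe 2.0 x1 uplink has the same raw rate and encoding. */
    double pcie2_x1_mbs  = 5e9 * (8.0 / 10.0) / 8.0 / 1e6;  /* ~500 MB/s */

    printf("four ports want ~%.0f MB/s, the x1 link offers ~%.0f MB/s\n",
           4 * usb3_port_mbs, pcie2_x1_mbs);   /* 2000 vs 500 */
    return 0;
}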
 

InvalidError

Titan
Moderator

Which devices would those be?

Most people are not going to lose sleep over the DMI, since it does not become a significant bottleneck unless you throw outlandish devices and workloads at it: practically no consumer-level storage or other device comes anywhere near 1.5GB/s, and DMI is full-duplex, so it can easily handle device-to-device moves/copies - about the most IO-intensive task normal people will ever demand of it. By the time it does become a bottleneck, Intel will probably have integrated the PCH into the CPU; there is already some of that coming with Skylake's four extra PCIE/SATA-Express lanes.

For people who have extreme IO needs that cannot be met by DMI, there is the LGA2011 option.
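To put the full-duplex point in numbers - illustrative figures, not measurements:

#include <stdio.h>

int main(void)
{
    /* DMI 2.0 is full-duplex: ~2 GB/s in EACH direction. A
     * device-to-device copy at rate R reads R inbound and writes R
     * outbound, so each direction carries only R, not 2R. */
    double dmi_per_dir_gbs = 2.0;  /* net, per direction */
    double copy_rate_gbs   = 0.5;  /* e.g. a fast SATA SSD-to-SSD copy */

    printf("copy loads each direction to %.0f%%\n",
           100.0 * copy_rate_gbs / dmi_per_dir_gbs);  /* 25% */
    return 0;
}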
 

InvalidError

Titan
Moderator

Each endpoint can send and receive at 32GB/s simultaneously, so the aggregate bandwidth is 64GB/s.

Somewhat of a dirty marketing trick, when all other interfaces advertise only per-direction signal-pair bandwidth.
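For anyone wondering where those figures come from, a quick back-of-the-envelope using the standard PCIe 4.0 parameters (the 32/64 figures are the rounded marketing versions):

#include <stdio.h>

int main(void)
{
    /* PCIe 4.0 x16: 16 GT/s per lane, 128b/130b encoding. */
    double per_dir_gbs = 16 * 16e9 * (128.0 / 130.0) / 8.0 / 1e9;

    printf("per direction: ~%.1f GB/s\n", per_dir_gbs);        /* ~31.5 */
    printf("both directions: ~%.1f GB/s\n", 2 * per_dir_gbs);  /* ~63   */
    return 0;
}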
 


The ones mentioned in the article that I linked to.
 

InvalidError

Titan
Moderator

Since DMI 2.0 runs at 20Gbps (~2GB/s net) each way, DMI alone does not explain why throughput drops under 1GB/s when switching to the PCH's PCIe lanes - if you set up a RAID0 array with four fast SATA 6Gb/s SSDs, you can get 1.2-1.5GB/s out of the Z87 even after all the extra overhead this implies. Parameters other than bandwidth must be at play, and latency should not be one of them either, with command queuing hiding it.

I would hazard a guess that the PCH is simply not optimized to pass PCIE traffic over the DMI.
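Quick sanity check on the 2GB/s figure, from the standard DMI 2.0 link parameters - the interpretation at the end is my guess, same as above:

#include <stdio.h>

int main(void)
{
    /* DMI 2.0: 4 lanes at 5 GT/s, 8b/10b encoding, per direction. */
    double dmi_net_gbs = 4 * 5e9 * (8.0 / 10.0) / 8.0 / 1e9;

    printf("DMI 2.0 net: ~%.1f GB/s each way\n", dmi_net_gbs);  /* 2.0 */
    /* A 4x SATA RAID0 reaching 1.2-1.5 GB/s over the same link shows
     * the bandwidth is attainable, so sub-1 GB/s PCIe-over-DMI results
     * point at something other than the raw link rate. */
    return 0;
}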
 
