Does Your Storage Controller Affect The Performance Of An SSD?

It's clear that Intel won over AMD in this, because 4k read/write and access time is what we care most about nowadays. It's a shame that AMD went for quantity over quality.

As a side note, when can we see a USB 3.0 controller comparison with these new AMD and Intel chipsets?
 
The one thing the article didn't say, which it should, is that Marvell controllers are garbage. Notice how often the P55 matches or beats one of the Marvell 6 Gb/s controllers. The PCIe x1 link issue is bad enough (a single PCIe 2.0 lane tops out around 500 MB/s, below what a SATA 6 Gb/s port can carry), but sometimes even having a proper connection doesn't help their performance.

Also not mentioned is SSD reliability. The only time I've ever had problems with an SSD was when it was connected to a Marvell controller (e.g., a failed firmware update; move the SSD to an Intel port and the update then works fine).

Ian.

 
There is one very significant piece of information not included in this article: which particular ports on the controller are being used makes a big difference.

Most embedded (or external) chipsets carry a bridge between SATA and PCI Express. The CPU accepts PCI Express connections, not SATA, so a conversion must be made, and the SATA chipset does it. Each PCI Express 2.0 lane carries 5 GT/s, roughly 4 Gb/s (about 500 MB/s) after 8b/10b encoding overhead; a PCI Express 3.0 lane carries 8 GT/s, roughly 8 Gb/s (about 1 GB/s).

Here's the problem I have seen in external expansion cards: they connect four SATA ports to a single PCI Express 2.0 lane. So potentially, four connected SATA 6 Gb/s drives, or 24 Gb/s of aggregate link bandwidth, are being funneled into a single ~4 Gb/s connection to the CPU. I don't care how good the SATA chipset is at processing and prioritizing I/O, you are going to have a bottleneck. Even four SATA 3 Gb/s drives total 12 Gb/s, more than a single PCI Express 2.0 lane can handle. A single SSD can already saturate a 3 Gb/s link, so this is not a theoretical bottleneck; it is a very real limitation.

So going back to the article: at most, I have seen four SATA ports connected to a single PCI Express 2.0 lane. I have seen six or eight connected to either two discrete lanes or an x2 link (or an x4 link in the case of SAS); an x2 PCI Express 2.0 link carries roughly 8 Gb/s of usable throughput. So depending on how the embedded chipset is wired on the motherboard, it may be the PCI Express lanes, not the SATA chipset, that limit your throughput. Different ports may be connected to different x1 PCI Express lanes or to an x2 link, giving you either two discrete paths to the CPU (maximizing throughput) or one larger pipe, which is better than an x1 link but not as good as discrete pathways.
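To put rough numbers on that, here is a minimal back-of-the-envelope sketch. The ~4 Gb/s effective per-lane figure is an approximation, and the port/lane configurations are hypothetical examples, not any specific board:

```python
# Back-of-the-envelope oversubscription check for SATA ports behind a PCIe uplink.
# The effective-bandwidth figures are approximations (post-8b/10b), not spec quotes.

PCIE2_LANE_GBPS = 4.0   # PCIe 2.0 lane: 5 GT/s raw, ~4 Gb/s after encoding overhead
SATA6G_GBPS = 6.0       # SATA 6 Gb/s link rate (raw)

def oversubscription(sata_ports: int, pcie_lanes: int) -> float:
    """Ratio of aggregate SATA link bandwidth to the PCIe uplink's bandwidth."""
    return (sata_ports * SATA6G_GBPS) / (pcie_lanes * PCIE2_LANE_GBPS)

# Hypothetical configurations, not measurements from any specific card:
for ports, lanes in [(4, 1), (4, 2), (8, 2)]:
    print(f"{ports} SATA 6 Gb/s ports over x{lanes} PCIe 2.0: "
          f"{oversubscription(ports, lanes):.1f}x oversubscribed")
```

Anything above 1.0x means the uplink, not the SATA chipset, is the ceiling once enough drives are busy at once.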

I have an external PCI Express controller with a few drives on my main system, and when transferring files from drives on the internal (motherboard) chipset to drives on the connected card, there is a noticeable throughput difference.
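If you want to quantify that difference, one rough approach is timing a big sequential read from a drive on each controller. A minimal sketch follows; the file path is hypothetical, and you'd want a file larger than RAM (or to drop the OS page cache first) so you measure the drive and controller rather than memory:

```python
# Rough sequential-read throughput timer (illustrative sketch).
# PATH is hypothetical; point it at a large file on the controller under test.
import time

PATH = "/mnt/testdrive/bigfile.bin"   # hypothetical multi-GB test file
CHUNK = 8 * 1024 * 1024               # 8 MiB reads

total = 0
start = time.monotonic()
with open(PATH, "rb", buffering=0) as f:   # unbuffered, so reads hit the OS directly
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start
print(f"Read {total / 1e6:.0f} MB in {elapsed:.2f} s "
      f"({total / elapsed / 1e6:.0f} MB/s)")
```

Run it once against a drive on the motherboard ports and once against the add-in card, and the gap shows up as MB/s rather than gut feel.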
 
I would like to have seen how CPU speed affects these measurements, if at all. As it is, other than getting off a Marvell controller or upgrading from 3 Gb/s to 6 Gb/s, there doesn't appear to be a whole lot of difference; some, but not enough to write home about (i.e., not enough to justify an upgrade).
 
Great article, guys. I own an 840 Pro myself, and I was wondering why the built-in benchmark numbers weren't as high as advertised. Now I know.
 
Looking at the testbed, I see the Intel X25-M G1. How the heck did that achieve 300+ MB/s doing anything at all? It's a SATA 3 Gb/s device, and 3 Gb/s works out to roughly 300 MB/s theoretical, less in practice. Your benchmarks are showing 6 Gb/s-class scores.
 
Sorry guys, I just need to put in some 'constructive criticism'. This article's last paragraph sounds so OBVIOUS that it's like reading an old issue of PC Magazine where the authors are a bunch of old fuddy-duddies stating the self-evident. ALL motherboards today come with built-in SATA ports, and nobody with half a brain will buy a separate PCIe SATA controller to run his SSD or mechanical HDD. NOBODY! Unless that person:

(1) has run out of southbridge-provided SATA ports;
(2) has an old board with SATA 3 Gb/s ports and thinks a fancy new SATA 6 Gb/s PCIe card will be a nice upgrade;
(3) has less than half a brain and thinks a separate SATA controller has some secret sauce that's faster than the motherboard's ports; or
(4) thinks the extra ASMedia controller that came with his board is better than what Intel or AMD came up with.

Of these four possibilities, 1 and 2 are probably acceptable; 3 and 4 are stupid scenarios.

No, OF COURSE and OBVIOUSLY you plug devices into the built-in, southbridge-connected SATA ports. Anyone who even thinks about installing his own SSD will AUTOMATICALLY do that, not go out and buy a separate SATA controller!
 
Nice review. That's actually a lot of benchmark results to sift through.

This would have been helpful when I was shopping for PCIe controller cards, although I didn't buy mine to use with an SSD.
 
This sentence sums it up for me, although Intel is the obvious winner: "However, you'd be hard-pressed to tell the difference by walking up to Intel- and AMD-based desktops and trying to tell the difference based on storage performance." I can agree with that, being an owner/user of both configurations.
 
Page 7: "though at a significant pentalty"

Regarding the results, I guess I might get a bit of a performance boost moving from an AMD 790X to an AMD 990X board, which is what I plan to do.
 
This is a review I've been waiting for for a long time. But where is AMD's SB850? Why was it omitted? Nevertheless, I would like to see a similar review, but with RAID modes, especially 0 and 1.
 
I have a Z68 chipset; I wonder what its specs are. Still using a 3 Gb/s SSD, though, so until I get a SATA 6 Gb/s SSD it doesn't matter too much.
 