Clockman is more or less right, but a few tweaks:
In RAID 5, let's look at a 3-disk array.
When you read, you read off two disks for any given block of data, since the third disk holds the parity data for that block.
Your access time until the read starts is thus that of the slower of the two drives, and your maximum throughput is the sum of the two drives.
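To make that concrete, here's a toy model in Python (the seek times and transfer rates are made up, purely illustrative): a stripe read can't start until the slowest participating disk is ready, but once started it streams from all the data disks at once.

    def raid5_read(seek_ms, mbps):
        # One entry per data disk in the stripe.
        latency = max(seek_ms)      # read starts when the slowest disk is ready
        throughput = sum(mbps)      # then all data disks stream in parallel
        return latency, throughput

    # 3-disk array: 2 data disks per stripe
    print(raid5_read([8.1, 8.4], [70, 70]))           # (8.4, 140)
    # 4-disk array: 3 data disks per stripe
    print(raid5_read([8.1, 8.4, 8.9], [70, 70, 70]))  # (8.9, 210)

More disks means you're taking the max over more seek times (slightly worse latency) but summing more transfer rates (better throughput).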
In a 4-disk RAID 5 you have three disks with a block of data and the 4th holding the parity data. Since you now wait for the slowest of three data drives before the read starts, you get slightly worse latency (access time) than a 3-disk array, but on average better sustained throughput, because you read off three drives instead of two for any block of data. (You actually read off all the drives in an array; the above is per individual block, and the parity is allocated to a different disk with each block.)
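If you want to see how the parity rotates, here's a sketch of one common rotation scheme (the "left" layouts; real controllers differ in the details): for each stripe, one disk holds parity and the rest hold data, and the parity disk shifts with each stripe.

    def raid5_stripe(stripe, n_disks):
        # Parity shifts one disk to the left with each stripe.
        parity_disk = (n_disks - 1 - stripe) % n_disks
        data_disks = [d for d in range(n_disks) if d != parity_disk]
        return parity_disk, data_disks

    for stripe in range(4):
        parity, data = raid5_stripe(stripe, 4)
        print(f"stripe {stripe}: parity on disk {parity}, data on {data}")
    # stripe 0: parity on disk 3, data on [0, 1, 2]
    # stripe 1: parity on disk 2, data on [0, 1, 3]
    # stripe 2: parity on disk 1, data on [0, 2, 3]
    # stripe 3: parity on disk 0, data on [1, 2, 3]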
The same logic applies when writing. In general, the more disks in an array, the worse the latency but the better the sustained throughput.
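On the write side, the parity itself is just an XOR across the data blocks in a stripe, which is why a small write can update parity without touching every disk: new parity = old parity XOR old data XOR new data. A minimal sketch:

    def xor_blocks(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    old_data   = b"\x0f\x0f"
    other_data = b"\xf0\x01"                       # untouched block in the same stripe
    old_parity = xor_blocks(old_data, other_data)  # parity over the full stripe

    new_data   = b"\xaa\x55"
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)

    # Same result as recomputing parity from scratch over the whole stripe:
    assert new_parity == xor_blocks(new_data, other_data)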
In theory, as Clockman suggests, you could have one drive that is slower than the others (rotational speed, interface, design, platter density, whatever), but in practice you almost always use identical (or at the very least very similar) drives in an array. Most SCSI controllers, for example, will spin all the drives down to the slowest drive's speed if you use mismatched drives.
Dedicated controllers have historically almost always been faster than software RAID, but with all these quad cores coming out at $270 a pop, it would be interesting to see an ICH9 go up against a dedicated array controller...