First of all, gigabit ethernet uses entirely different signaling and encoding from 100-meg, and the overhead is one heck of a lot greater than you'd expect.
First, there's the 8b/10b encoding, so every 8-bit byte goes out on the wire as a 10-bit symbol. Then there's the matter of fixed frame sizes: it's quite possible for a TCP/IP packet to span two frames, filling 100% of the first and only 1% of the second, which works out to 50.5% efficiency for that packet. Third, every frame is only partly payload; the rest is taken up by header information, footer, and CRC. It's not much, perhaps about 5% of the frame, but it can get noticeable.
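To put rough numbers on those last two points, here's a quick back-of-the-envelope sketch in Python. The per-frame byte counts are standard Ethernet framing figures (preamble+SFD, header, FCS, inter-frame gap); the split-packet function just follows the fixed-slot model from the 100% + 1% example above:

```python
import math

# Standard Ethernet overhead per frame, on top of the payload:
# 8 B preamble+SFD, 14 B header, 4 B CRC/FCS, 12 B inter-frame gap.
OVERHEAD = 8 + 14 + 4 + 12   # 38 bytes per frame
MAX_PAYLOAD = 1500           # max payload of a standard frame

def frame_efficiency(payload: int) -> float:
    """Fraction of wire time spent on actual payload in one frame."""
    return payload / (payload + OVERHEAD)

def split_efficiency(packet: int) -> float:
    """Efficiency if a packet occupies whole frame slots,
    as in the 100% + 1% example above."""
    frames = math.ceil(packet / MAX_PAYLOAD)
    return packet / (frames * MAX_PAYLOAD)

print(f"{frame_efficiency(1500):.1%}")  # ~97.5%: full frames lose a few percent
print(f"{split_efficiency(1515):.1%}")  # 50.5%: 1515 B spills into a 2nd frame
```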
First, you have to divide by 10, not by 8, to get the speed in bytes/second (i.e. 100 MB/s, not 125 MB/s).
Second, if you transmit a lot of inefficient frames (networking programs aren't exactly frugal with bandwidth when they have gigabit ethernet to play with, and next to none are actually optimized for it), you might lose up to half of the bandwidth.
Third, when you factor in the frame-level overhead, you might end up with maybe 40-45 MB/s of the promised 100 MB/s...
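Chaining those three factors together is just multiplication. A minimal sketch using the round numbers from above (the 50% frame fill is the pessimistic assumption, so treat the result as a ballpark, not a measurement):

```python
# Ballpark effective throughput, chaining the three factors above.
line_rate_MBps = 1000 / 10   # 8b/10b: divide Mbit/s by 10, not 8 -> 100 MB/s
frame_fill     = 0.50        # pessimistic: many split/underfilled frames
framing_loss   = 0.95        # ~5% gone to headers, footer, and CRC

effective = line_rate_MBps * frame_fill * framing_loss
print(f"~{effective:.0f} MB/s")  # ~48 MB/s; slightly worse fill lands at 40-45
```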
Fortunately, a lot of these issues can be resolved by optimizing software and firmware to take advantage of the available bandwidth and the idiosyncrasies of gigabit ethernet.