People these days are led to think that "bandwidth" is synonymous with "bits per second." But really, bps is just the way we measure bandwidth when we're talking about digital data. I saw someone ask this question somewhere else, and nearly everyone confidently suggested one of two things:
1) The ISP counts every bit you send to make sure you can't transfer more than you pay for, and once you go over your limit, they just start tossing the extra packets.
This one suggests the ISP takes on a lot of overhead just monitoring and controlling you.
2) If a theoretical wire is capable of 10 Mbps and you only pay for 5 Mbps, they use a timer to cut the speed of your transfer in half.
This one suggests that a ping on this theoretical 5 Mbps connection would literally take twice as long. (I've tried to sketch both ideas in toy code below.)
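Just to make those two theories concrete in my own head, here's roughly what I picture each one doing. This is purely hypothetical Python on my part, every name and number is made up, and it's definitely not how any real ISP equipment works:

```python
# Purely hypothetical toy models of the two theories above -- not real ISP code.
import time
from collections import deque

class TheoryOnePolicer:
    """Theory 1: count the bytes and toss whatever exceeds the paid-for rate."""
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.allowance = burst_bytes          # how much you can send right now
        self.max_allowance = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # The allowance refills over time at the rate you pay for.
        self.allowance = min(self.max_allowance,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.allowance:
            self.allowance -= packet_bytes
            return True                       # forward the packet
        return False                          # over the limit: toss it

class TheoryTwoShaper:
    """Theory 2: never drop, just delay packets so they average the paid-for rate."""
    def __init__(self, rate_bytes_per_sec):
        self.rate = rate_bytes_per_sec
        self.queue = deque()
        self.next_send = time.monotonic()

    def enqueue(self, packet_bytes):
        self.queue.append(packet_bytes)

    def maybe_send(self):
        now = time.monotonic()
        if self.queue and now >= self.next_send:
            size = self.queue.popleft()
            self.next_send = now + size / self.rate   # pace sends with a timer
            return size                               # this packet goes out now
        return None
```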
(Heads up: I'm far from a pro, and the magic of networking has always mystified me. I'm not claiming to know any of this; it's just how I assume it works, and I could be 100% wrong.)
Without going into too much detail, here's my best assumption:
The term "bandwidth" refers to a range of frequencies. When you buy more bandwidth, it's just the ISP telling that end-of-the-street box to assign you more frequencies to communicate on. The more frequencies you have - the more data you can send at once (thanks to Fourier). That box will listen, process, and send every single bit of data you send it using your assigned frequencies. Basically, they give you resources and you use them. If you use up all your frequencies then your modem just queues up the data you're sending and sends it when it can.
That would mean there's (theoretically) no need to monitor every single bit you transfer and there's definitely no need to slow your packets.
There seems to be an emphasis these days on software-controlled things rather than hardware-controlled things, and that's understandable in a lot of situations. So I could be totally wrong.
I'm always up for learning the truth behind how these magical computers/networks work.
Thanks for any input!