It stems back to machine language, which is binary: 0s and 1s, off/on. Each 0 or 1 is a bit, but in order to work with larger numbers, bits are grouped into units of 8, called a byte (and since each bit has 2 states, one byte can represent 2^8 = 256 different values). In the same way people write 1k instead of 1000, 1 KB = 1024 bytes (2^10). The machines don't care, it's all just long numbers to them, but for programmers and anyone else reading the numbers, a figure like 1 TB is much easier to read than 2^40 bytes, or 1,099,511,627,776 bytes, or 8,796,093,022,208 bits.
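A quick sketch of the arithmetic above (the names `KIB` and `TIB` are just illustrative labels for the powers of two):

```python
# Binary prefixes are powers of two, not powers of ten.
KIB = 2 ** 10   # 1,024 bytes, the "1 KB" in the text
TIB = 2 ** 40   # the "1 TB" in the text

print(KIB)       # 1024 bytes
print(TIB)       # 1099511627776 bytes
print(TIB * 8)   # 8796093022208 bits, i.e. that store-shelf HDD
```

Each step up (KB to MB to GB to TB) is another factor of 2^10, which is why the numbers balloon so quickly.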
Could you really imagine going into a store and buying an 8,796,093,022,208-bit HDD? It's kind of like asking why people use a $1 bill instead of a 100¢ bill. And once you get to a $100 bill, would you really want to pay with a 10,000¢ bill instead?