Time for another computer rant. In computers, the amount or size of data is often measured in either bits or bytes. A "bit" is a single piece of data: either a 0 or a 1 (binary, on or off, true or false, etc.). It is the smallest unit of data you can have, but on its own it's not very useful, since it can only store one of 2 values, 0 or 1. String multiple bits together, though, and you can store increasingly larger pieces of data. A "byte" is 8 bits together: where a bit can hold 2 values, a byte can hold 256 (2 to the 8th power). Bytes, just like bits, can be strung together to store larger blocks of data.
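A quick sanity check of that math, as a throwaway Python snippet (purely illustrative, nothing from a real product):

```python
# Each bit doubles the number of values you can represent.
bit_values = 2 ** 1    # one bit: 2 possible values
byte_values = 2 ** 8   # one byte (8 bits): 256 possible values
print(bit_values, byte_values)   # 2 256

# Stringing bytes together keeps multiplying the possibilities:
# two bytes (16 bits) can already hold 65,536 distinct values.
print(2 ** 16)   # 65536
```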
What really annoys me is when companies insist on using bits instead of bytes. And they do it for one simple reason: it makes their product/service sound better. After all, what sounds better: DSL at 1600 Kbps or DSL at 200 KBps? USB at 480 Mbps or USB at 60 MBps? Those are the same speeds, just different units.
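The conversion behind those examples is nothing more than dividing by 8. A minimal Python sketch (the 480 Mbps figure is real USB 2.0; the DSL figure is just an example rate):

```python
def bits_to_bytes(rate_in_bits):
    """Convert a rate in bits per second to bytes per second (same prefix)."""
    return rate_in_bits / 8

print(bits_to_bytes(480))    # USB 2.0: 480 Mbps -> 60.0 MBps
print(bits_to_bytes(1600))   # DSL example: 1600 Kbps -> 200.0 KBps
```

Same hardware, same wire, but the bits number is eight times bigger on the box.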
So why does this bother me so much? Simple: a bit might be the smallest representation of data, but in modern computers everything is byte oriented. It is impossible to have anything other than a whole number of bytes. You cannot have a file that is 1 bit in size. You cannot transmit 8001 bits of data over the network. Everything MUST be a whole number of bytes, period. So any time you see something expressed in bits, it's there for one reason: someone in marketing is trying to trick you into thinking you're getting more than you really are. So the next time you see something advertised in bits, remember that someone in marketing just insulted your intelligence, and you should be offended.