A bit is the smallest unit in computer science. It is either 1 or 0. The unit abbreviation for bit is 'b'.
A byte is the next largest unit in cs. It is equal to 8 bits, so it can have 2^8 = 256 different values. The unit abbreviation for byte is 'B'.
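As a quick illustration (a minimal Python sketch, not part of the original post), you can check the 8-bits-per-byte arithmetic directly:

```python
# 8 bits per byte -> 2**8 distinct values (0 through 255)
BITS_PER_BYTE = 8
print(2 ** BITS_PER_BYTE)          # 256

# Python's bytes type enforces that range: each element is 0..255
print(list(bytes([0, 127, 255])))  # [0, 127, 255]
print(0b11111111)                  # 255, the largest value 8 bits can hold
```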
There are two scales for measuring large amounts of bits and bytes. One is the base-10 scale from SI that we know and love; the other is base 2. The prefixes that we use are, in order from smallest to largest: 'kilo', 'mega', 'giga', 'tera', 'peta', ..., and the abbreviations are 'K', 'M', 'G', 'T', 'P', ...
When using the SI scale, each prefix is a factor of 1000 (10^3) larger than the previous one; e.g., 1 mega- is 1000 kilo-. When using the base-2 scale, the factor is 2^10, or 1024; e.g., 1 mega- is 1024 kilo-.
When it is not clear from context which scale is being used, you can specify base 2 by replacing the second syllable of the prefix with 'bi', so the scale becomes 'kibi', 'mebi', 'gibi', 'tebi', 'pebi', ..., and the abbreviations become 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', ... These abbreviations aren't widely used, however, and there is no corresponding way to specify that you mean the base-10 scale.
This isn't a huge issue, though, because aside from large-scale storage (anything bigger than RAM) and sometimes networking, base 2 is assumed.
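To make the difference between the two scales concrete, here is a minimal Python sketch (the function name and prefix tables are my own choices, not anything standard) that formats the same byte count using both conventions:

```python
def format_size(num_bytes, base2=True):
    """Format a byte count using binary (Ki, Mi, Gi, ...) or SI (K, M, G, ...) prefixes."""
    step = 1024 if base2 else 1000
    prefixes = ["", "Ki", "Mi", "Gi", "Ti", "Pi"] if base2 else ["", "K", "M", "G", "T", "P"]
    value = float(num_bytes)
    for prefix in prefixes:
        if value < step or prefix == prefixes[-1]:
            return f"{value:.2f} {prefix}B"
        value /= step

# The same number of bytes reads differently on the two scales:
print(format_size(5_000_000_000, base2=False))  # 5.00 GB
print(format_size(5_000_000_000, base2=True))   # 4.66 GiB
```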
Great post with useful info, but there's a bit more (if you pardon the pun) one can add to it:

Eight bits in a byte is a de facto standard, so it is a typically safe assumption on most modern hardware, but the actual definition of how many bits are in a byte is hardware (and situation) dependent. There is no definitive or formal standard that defines how many bits are in a byte. The range is often between 7 and 12 bits per byte, but I think some really ancient systems (1940s to 1950s) were below 7 bits per byte.
Good point. I should have used 'octet' and mentioned that a byte is assumed to be equal to an octet unless otherwise stated. I should probably also have called a bit a fundamental unit and mentioned its relationship to bans.
No, I just couldn't think of a relevant use for it, aside from making it easier to talk about hex. Not to mention, it's even less of a standard for measuring things than the -bi prefixes.
I agree that it's not super useful when comparing storage systems. In debugging, though, a nibble is one hex digit of a byte, which is useful. It's also the next largest cs unit. If you had said the next largest commonly used unit for measuring storage, it would have been fairly clear. I just wanted to educate the younger readers (probably not you) about another way to look at bytes and bits. Cheers.
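To make the nibble/hex relationship concrete, here's a small Python sketch (illustrative only; the variable names are mine, not from the thread) that splits one byte into its two nibbles, each of which is exactly one hex digit:

```python
value = 0xB7                 # one byte, written as two hex digits
high = (value >> 4) & 0xF    # high nibble: 0xB (11)
low = value & 0xF            # low nibble:  0x7 (7)
print(f"{value:#04x} -> high nibble {high:#x}, low nibble {low:#x}")
# 0xb7 -> high nibble 0xb, low nibble 0x7
```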