Binary Bits and Bytes

A byte is a set of eight bits. Why not ten, since we count in base ten? In the early days of computing, bits were handled in 'words', and each computer had its own word length. Some computers, especially military ones, had decidedly odd word lengths, such as 21 or 37 bits, and smaller processors and microcontrollers could have sub-byte word lengths -- for example, the Intel 4004, the first commercial microprocessor, had a 4-bit word. It was the IBM System/360 mainframe of 1964 that standardized the 8-bit byte and words built out of bytes. This resolved a conflict between processing, where a longer word is more efficient, and addressing, where a shorter word is more efficient: each address in the 360 corresponded to one byte, but the machine processed data in words four bytes (32 bits) long.
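
To make that trade-off concrete, here is a small Python sketch -- purely an illustration, not period-accurate System/360 code -- showing four individually addressable bytes being combined into a single 32-bit word, using the big-endian byte order the 360 used:

<syntaxhighlight lang="python">
# Four individually addressable bytes packed into one 32-bit word,
# in the big-endian order the System/360 used.
memory = bytes([0x12, 0x34, 0x56, 0x78])    # four byte addresses: 0, 1, 2, 3
word = int.from_bytes(memory, byteorder="big")
print(hex(word))                            # 0x12345678: one 32-bit word
</syntaxhighlight>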
 
A byte has 2^8, or 256, unique combinations to work with. For example, a single byte can represent any integer between 0 and 255, or between -128 and +127 if one bit is used to indicate whether the number is positive or negative (2^7 = 128). The maximum representable number is always [[Powers of Two Minus One|one less than two to the power of the number of bits]], because zero takes up one of the possible bit patterns.
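
A short Python sketch of those ranges -- just a check of the arithmetic above, using the two's-complement convention for the signed case:

<syntaxhighlight lang="python">
BITS = 8

# 2^8 distinct bit patterns fit in one byte.
print(2 ** BITS)                                        # 256

# Unsigned interpretation: 0 .. 2^8 - 1 (zero uses up one pattern).
print(0, "to", 2 ** BITS - 1)                           # 0 to 255

# Two's-complement signed interpretation: -2^7 .. 2^7 - 1.
print(-(2 ** (BITS - 1)), "to", 2 ** (BITS - 1) - 1)    # -128 to 127

# The same bit pattern read both ways:
pattern = 0b11111111                                    # all eight bits set
unsigned = pattern                                      # 255 as an unsigned byte
signed = pattern - 256 if pattern >= 128 else pattern   # -1 in two's complement
print(unsigned, signed)                                 # 255 -1
</syntaxhighlight>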
 
So, what does the "8-bit", "16-bit", "32-bit", or "128-bit" label on a gaming system mean?