ME Labs, Inc.
Binary Number System
Microcontrollers operate using binary logic. These devices represent values using two voltage levels (0V for logic 0 and +5V for logic 1). With two levels we can represent exactly two different values. These could be any two different values, but by convention we use the values zero and one. These two values correspond to the two digits used by the binary number system. Microcontrollers employ binary because of this correspondence between the logic levels and the two digits used in the binary numbering system.
The binary number system works like the decimal number system, with the following exceptions:
- Binary permits only the digits 0 and 1 (rather than 0 through 9).
- The weighted position values are powers of two rather than powers of ten.
Binary Number Formats
We typically write binary numbers as a sequence of bits (bits is short for binary digits). We have defined boundaries for these bits. These boundaries are:
- A single bit
- A nibble (4 bits)
- A byte (8 bits)
- A word (16 bits)
In any number base, we may add as many leading zeroes as we wish without changing the value. However, we normally add leading zeroes to adjust the binary number to a desired size boundary. For example, we can represent the number five as 101, padded to the nibble boundary as 0101, or padded to the byte boundary as 00000101.
We'll number each bit as follows: the right-most bit is bit zero, and each bit to the left is numbered one higher, so the left-most bit of an eight-bit number is bit seven.
Bit zero is usually referred to as the LSB (least significant bit). The left-most bit is typically called the MSB (most significant bit). We will refer to the intermediate bits by their respective bit numbers.
Bits
The smallest "unit" of data on a binary computer is a single bit. Since a single bit is capable of representing only two different values (typically zero or one) you may get the impression that there are a very small number of items you can represent with a single bit. Not true! There are an infinite number of items you can represent with a single bit.
With a single bit, you can represent any two distinct items. Examples include zero or one, true or false, on or off, male or female, and right or wrong. However, you are not limited to representing binary data types (that is, those objects which have only two distinct values).
To confuse things even more, different bits can represent different things. For example, one bit might be used to represent the values zero and one, while an adjacent bit might be used to represent the values true and false. How can you tell by looking at the bits? The answer, of course, is that you can't. But this illustrates the whole idea behind computer data structures: data is what you define it to be.
If you use a bit to represent a boolean (true/false) value then that bit (by your definition) represents true or false. For the bit to have any true meaning, you must be consistent. That is, if you're using a bit to represent true or false at one point in your program, you shouldn't use the true/false value stored in that bit to represent red or blue later.
Since most items you will be trying to model require more than two different values, single bit values aren't the most popular data type. However, since everything else consists of groups of bits, bits will play an important role in your programs. Of course, there are several data types that require two distinct values, so it would seem that bits are important by themselves. However, you will soon see that individual bits are difficult to manipulate, so we'll often use other data types to represent boolean values.
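Although individual bits have no address of their own, they are easy to reach with C's bitwise operators. The helper names below are ours, a minimal sketch rather than any standard library API:

```c
#include <stdint.h>

/* Set, clear, and test a single bit (0-7) within a byte.
   Helper names are illustrative only. */
uint8_t set_bit(uint8_t value, uint8_t bit)   { return (uint8_t)(value | (1u << bit)); }
uint8_t clear_bit(uint8_t value, uint8_t bit) { return (uint8_t)(value & ~(1u << bit)); }
int     test_bit(uint8_t value, uint8_t bit)  { return (value >> bit) & 1; }
```

This is the usual way boolean values end up stored: packed into a larger data type and reached with shifts and masks.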
Nibbles
A nibble is a collection of bits on a 4-bit boundary. It wouldn't be a particularly interesting data structure except for two items: BCD (binary coded decimal) numbers and hexadecimal (base 16) numbers. It takes four bits to represent a single BCD or hexadecimal digit.
With a nibble, we can represent up to 16 distinct values. In the case of hexadecimal numbers, the values 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F are represented with four bits. BCD uses ten different digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) and requires four bits. In fact, any sixteen distinct values can be represented with a nibble, but hexadecimal and BCD digits are the primary items we can represent with a single nibble.
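To make the BCD idea concrete, here is a small C sketch (function names are ours, not from any particular library) that packs a two-digit decimal value into one byte, one BCD digit per nibble:

```c
#include <stdint.h>

/* Pack a two-digit decimal value (0-99) into one byte of BCD:
   the tens digit in the high nibble, the ones digit in the low nibble. */
uint8_t to_packed_bcd(uint8_t value) {
    return (uint8_t)(((value / 10) << 4) | (value % 10));
}

/* Recover the decimal value from a packed BCD byte. */
uint8_t from_packed_bcd(uint8_t bcd) {
    return (uint8_t)((bcd >> 4) * 10 + (bcd & 0x0F));
}
```

Notice that the packed BCD value reads naturally in hexadecimal: decimal 37 packs to the byte 0x37.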
Bytes
Without question, the most important data structure used by the microcontroller is the byte. A byte consists of eight bits and is the smallest addressable datum (data item) in the microcontroller.
The bits in a byte are numbered from bit zero (0) through seven (7). The bit positions are shown with their weighted values in the following table:

Bit:    7    6    5    4    3    2    1    0
Weight: 128  64   32   16   8    4    2    1

Bit-0 is the low order bit, or least significant bit; bit-7 is the high order bit, or most significant bit of the byte. We'll refer to all other bits by their number.
A byte also contains exactly two nibbles. Bits 0 through 3 comprise the low order nibble, and bits 4 through 7 form the high order nibble. Since a byte contains exactly two nibbles, byte values require two hexadecimal digits.
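The nibble boundaries within a byte can be sketched with a shift and a mask (helper names are ours):

```c
#include <stdint.h>

/* Split a byte into its two nibbles. */
uint8_t low_nibble(uint8_t b)  { return (uint8_t)(b & 0x0F); }  /* bits 0-3 */
uint8_t high_nibble(uint8_t b) { return (uint8_t)(b >> 4); }    /* bits 4-7 */
```

Each nibble is exactly one hexadecimal digit, which is why a byte value is always written as two hex digits.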
Since a byte contains eight bits, it can represent 2^8, or 256, different values. Generally, we'll use a byte to represent:
- Unsigned numeric values in the range 0 to 255
- Signed numbers in the range -128 to +127
- ASCII character codes
- Other special data types requiring no more than 256 different values
One of the most important uses for a byte is holding a character code. Characters displayed on an LCD or sent via serial communication all have numeric values. To allow communication with the rest of the world, microcontrollers use the ASCII character set. There are 128 defined codes in the standard ASCII character set. IBM uses the remaining 128 possible values for extended character codes including European characters, graphic symbols, Greek letters, and math symbols.
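In C, this correspondence between characters and byte values is direct: a character literal is just its ASCII code. A small sketch (the function name is ours):

```c
/* In ASCII, a character is just a byte value: 'A' is 65, '0' is 48.
   The digit characters are contiguous, so a numeric digit becomes its
   character code by adding '0'. Valid for d in 0-9. */
char digit_to_ascii(unsigned d) {
    return (char)('0' + d);
}
```

This trick of adding '0' is how raw numeric digits are converted for display on an LCD or for serial transmission.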
Words
A word is a group of sixteen bits. The bits in a word are numbered from bit zero (0) through fifteen (15). The bit positions are shown with their weighted values in the following table:

Bit:    15     14     13    12    11    10    9    8    7    6   5   4   3  2  1  0
Weight: 32768  16384  8192  4096  2048  1024  512  256  128  64  32  16  8  4  2  1
Like the byte, bit-0 is the LSB and bit-15 is the MSB. When referencing the other bits in a word, use their bit position number.
Notice that a word contains exactly two bytes. Bits 0 through 7 form the low order byte (byte-0), and bits 8 through 15 form the high order byte (byte-1).
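Splitting a word into its two bytes, and rebuilding it, is again a shift and a mask. A sketch with illustrative helper names:

```c
#include <stdint.h>

/* Split a 16-bit word into its two bytes, and rebuild it. */
uint8_t  low_byte(uint16_t w)  { return (uint8_t)(w & 0x00FF); } /* bits 0-7  */
uint8_t  high_byte(uint16_t w) { return (uint8_t)(w >> 8); }     /* bits 8-15 */
uint16_t make_word(uint8_t hi, uint8_t lo) {
    return (uint16_t)(((uint16_t)hi << 8) | lo);
}
```

This decomposition matters in practice, since byte-oriented peripherals and registers often require a word to be transferred one byte at a time.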
With 16 bits, you can represent 2^16 (65,536) different values. The major uses for words are:
- Unsigned numeric values in the range 0 to 65,535
- Signed numeric values in the range -32,768 to +32,767
- Other data types with no more than 65,536 different values
Number Base Conversion
Binary to Decimal
It is very easy to convert from a binary number to a decimal number. Just like the decimal system, we multiply each digit by its weighted position, and add each of the weighted values together. For example, the binary value 11001010 represents:
(1*2^7) + (1*2^6) + (0*2^5) + (0*2^4) + (1*2^3) + (0*2^2) + (1*2^1) + (0*2^0) =
(1 * 128) + (1 * 64) + (0 * 32) + (0 * 16) + (1 * 8) + (0 * 4) + (1 * 2) + (0 * 1) =
128 + 64 + 0 + 0 + 8 + 0 + 2 + 0 = 202
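The weighted-position rule above can be sketched in C as a loop over the digits (the function name is ours):

```c
#include <stdint.h>

/* Convert a string of binary digits, MSB first, to its decimal value. */
uint32_t bin_to_dec(const char *bits) {
    uint32_t value = 0;
    for (; *bits; ++bits) {
        /* each step doubles every weight seen so far, then adds the next bit */
        value = value * 2 + (uint32_t)(*bits - '0');
    }
    return value;
}
```

Running it on the example above, bin_to_dec("11001010") yields 202.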
Decimal to Binary
Converting decimal to binary is slightly more difficult. There are two methods that may be used to convert from decimal to binary: repeated division by 2, and repeated subtraction of the weighted position values.
Repeated Division By 2
For this method, divide the decimal number by 2. If the remainder is 0, write down a 0; if the remainder is 1, write down a 1. Continue by dividing each new quotient by 2 and recording the remainder, until the quotient is 0. The remainders, taken in order, are the binary equivalent of the decimal number: the first remainder is the least significant digit (right-most), and each new digit is written to the left of the previous one. Consider the number 2671.
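The division procedure maps directly onto a loop; a sketch in C (the function name and buffer convention are ours):

```c
#include <stdint.h>
#include <string.h>

/* Repeated division by 2: each remainder is the next binary digit,
   written right to left. buf must hold at least 33 chars; the return
   value points at the first digit of the result. */
char *dec_to_bin_div(uint32_t value, char buf[33]) {
    char *p = buf + 32;
    *p = '\0';
    if (value == 0) { *--p = '0'; return p; }
    while (value > 0) {
        *--p = (char)('0' + value % 2); /* remainder: next digit to the left */
        value /= 2;                     /* continue with the quotient */
    }
    return p;
}
```

For 2671, the successive remainders are 1,1,1,1,0,1,1,0,0,1,0,1, giving 101001101111.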
The Subtraction Method
For this method, start with a weighted position value greater than the number, then work downward through the weighted position values. If the number is greater than or equal to the current weighted position value, write down a 1 and subtract that value from the number; if it is less, write down a 0.
This process is continued until the result is 0. The digits which represent the binary equivalent of the decimal number are written beginning at the most significant digit (left-most), and each new digit is written to the right of the previous one. Consider the same number, 2671, using this method.
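The subtraction method can be sketched as a loop that walks the weights from the largest power of two that fits down to 1 (function name and buffer convention are ours):

```c
#include <stdint.h>
#include <string.h>

/* Subtraction method: write a 1 whenever the current weight can be
   subtracted from the remaining value, a 0 otherwise, from the most
   significant digit down. buf must hold at least 33 chars. */
char *dec_to_bin_sub(uint32_t value, char buf[33]) {
    uint32_t weight = 1;
    while (weight <= value / 2) weight *= 2;  /* largest power of 2 <= value */
    char *p = buf;
    for (; weight > 0; weight /= 2) {
        if (value >= weight) { *p++ = '1'; value -= weight; }
        else                 { *p++ = '0'; }
    }
    *p = '\0';
    return buf;
}
```

For 2671 the first weight subtracted is 2048, then 512, 64, 32, 8, 4, 2, and 1, producing 101001101111 just as the division method did.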
ME Labs, Inc.
2845 Ore Mill Road, STE 4
Colorado Springs CO 80904
(719) 520-1867 fax