EEVblog Electronics Community Forum
Electronics => Beginners => Topic started by: dentaku on January 09, 2014, 11:50:41 pm

I'm wondering why BCD is organized the way it is.
Why isn't the number one 1000 instead of 0001 and two is 0010 and not 0100 etc?
In other words why are the bits ordered 8421 and not 1248?

probably the same reason the decimal number 192 is written as 192 rather than 291

Binary: 8 4 2 1
Decimal: 1000 100 10 1

MSB/MSD comes first in most number systems...

Why isn't the number one 1000 instead of 0001 and two is 0010 and not 0100 etc?
I suppose for the same reason that we order decimal numbers the same way: 1000s, 100s, 10s, 1: 8s, 4s, 2s, 1s in binary
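To make the parallel concrete, here's a tiny Python sketch (my own illustration, not from anyone's post) summing the 8-4-2-1 weights:

```python
# Each BCD digit uses the place values 8, 4, 2, 1 (MSB first),
# just like a 4-digit decimal number uses 1000, 100, 10, 1.
bits = [0, 1, 1, 1]                      # the BCD code for 7, MSB first
weights = [8, 4, 2, 1]
value = sum(b * w for b, w in zip(bits, weights))
print(value)                             # 7
```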

Elementary, my dear Watson, elementary! :/O
as explained above.

Google "endianness" and you will find it. It can go either way; it depends on how the hardware is designed.

Well in computing it can go either way, and it often will, even within the same system.
Intel CPUs as an example are "little endian", meaning that the bit with the lowest address is the least significant digit. So, in your example, this would be 1248.
ARM CPUs were big-endian (8421) but as of version 3 they are now bi-endian, meaning that you can control the endianness via software configuration to suit your own needs. Very handy.
When you design hardware that is implemented on an FPGA, it is very easy to swap endianness or to use little endian everywhere when you wanted big endian.
In short, both types are used all over the freaking place in digital systems.
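For what it's worth, byte order is easy to see from software. A minimal Python sketch (the `struct` format codes `<` and `>` select little- and big-endian):

```python
import struct

# The same 32-bit value stored in the two byte orders.
value = 0x12345678
little = struct.pack('<I', value)   # least significant byte first
big = struct.pack('>I', value)      # most significant byte first
print(little.hex())                 # 78563412
print(big.hex())                    # 12345678
```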

EDIT: Ah, I thought so.
Doesn't that refer to little-endian vs big-endian? Or is that something different?

I'm wondering why BCD is organized the way it is.
Why isn't the number one 1000 instead of 0001 and two is 0010 and not 0100 etc?
0001 means we don't care how many pointless leading bits there are before the first 1, and we don't need to know how many bits wide the field is (only how many actually useful bits were given). This gives more efficient operations; consider a shift register with reset:
To represent 1 as "1000": RESET, Shift 1, Shift 0, Shift 0, Shift 0, Latch
To represent 1 as "0001": RESET, Shift 1, Latch
(well, of course, that depends on how you take the data out of the register, but at least in my mind, it always goes "LSB" as the last shift).
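A toy Python model of that shift register (purely illustrative; the function name and 4-bit width are my own assumptions) shows why "0001" needs fewer shifts:

```python
# RESET clears the register; each Shift pushes one bit in at the LSB end;
# Latch is just reading the final value out.
def shift_in(bits, width=4):
    reg = 0                                          # RESET
    for b in bits:
        reg = ((reg << 1) | b) & ((1 << width) - 1)  # Shift b
    return reg                                       # Latch

print(shift_in([1]))            # 1 -> "0001" after a single shift
print(shift_in([1, 0, 0, 0]))   # 8 -> "1000" needs four shifts
```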

Great, I've heard about LSB and endianness all my life, but it was probably the 90's since I last paid attention to what it meant.
I can see how it's like decimal numbers with the 1000's first then the 100's then the 10's then 1's when you have a 4 digit number.

Endianness refers to the BYTE order. The BITS in a byte remain MSB..LSB in all systems.

Endianness refers to the BYTE order. The BITS in a byte remain MSB..LSB in all systems.
I always thought it referred to the state of being of an Existentialist Native American! ;D

hi,
yesterday my friend told me that he uses a BCD sequence like: 20 10 8 4 2 1.
i don't remember any number bigger than 8 in BCD in the textbook?
could somebody explain? or even use it?
or did he just make a mistake...

yesterday my friend told me that he uses a BCD sequence like: 20 10 8 4 2 1.
i don't remember any number bigger than 8 in BCD in the textbook?
could somebody explain? or even use it?
or did he just make a mistake...
Yup, probably a small glitch in his brain .. happens :)

yesterday my friend told me that he uses a BCD sequence like: 80 40 20 10 8 4 2 1.
i don't remember any number bigger than 8 in BCD in the textbook?
could somebody explain? or even use it?
That's the tens digit.
BCD is about coding decimal digits into binary code, hence the name (binary coded decimal). If you have a two digit decimal number, you need 8 BCD bits.
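In code, packing a two-digit decimal number into those 8 BCD bits might look like this (a Python sketch; `to_bcd_byte` is a made-up name):

```python
def to_bcd_byte(n):
    """Pack 0..99 into one byte: tens digit in the high nibble
    (weights 80 40 20 10), ones digit in the low nibble (8 4 2 1)."""
    assert 0 <= n <= 99
    tens, ones = divmod(n, 10)
    return (tens << 4) | ones

print(format(to_bcd_byte(45), '08b'))   # 01000101
```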

so the number will continue to ..... 800 400 200 100 80 40 20 10 8 4 2 1 ?
ow...
i think not many people use BCD. when i Google "two digit BCD", i can't find any clue

yesterday my friend told me that he uses a BCD sequence like: 20 10 8 4 2 1.
BCD is encoded with 4 bits typically, since there are only 10 decimal digits, that's all we need.
BCD is simply taking each digit of a decimal number, encoding to binary, and there you go:
0 = 0000
1 = 0001
2 = 0010
...
9 = 1001
That's all you need. However, some variations might add further "characters": perhaps you want a decimal point, letters, or some mathematical symbols, in which case another bit or two might be useful as an extension (if you need more than 6 extra characters, anyway).
The MAX7219 7-segment driver for example provides some extra characters in its BCD decoding, although it just fits them into the 6 slots after 1001 without requiring a 5th bit.
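As a sketch of that idea in Python (the six extra symbols below follow the MAX7219 "Code B" font as I remember it: -, E, H, L, P, blank; treat that mapping as an assumption and check the datasheet):

```python
# BCD codes 0-9 plus six extra symbols packed into the otherwise
# unused codes 10-15, so no 5th bit is needed.
CODE_B = {i: str(i) for i in range(10)}
CODE_B.update({10: '-', 11: 'E', 12: 'H', 13: 'L', 14: 'P', 15: ' '})

print(CODE_B[0b1001])   # '9'
print(CODE_B[0b1011])   # 'E'
```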

so the number will continue to ..... 800 400 200 100 80 40 20 10 8 4 2 1 ?
Not really no. Each digit is encoded as a bit sequence separately.
E.g., the number 1345 is encoded to the 4 binary sequences
0001
0011
0100
0101
(if those 4 digit binary numbers are packed into 2 bytes when sending or ... is not important for the discussion)
The wikipedia article is a good place for you: http://en.wikipedia.org/wiki/Binary-coded_decimal#Basics (http://en.wikipedia.org/wiki/Binary-coded_decimal#Basics)
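The 1345 example above, done in a few lines of Python (the function name is my own):

```python
def bcd_nibbles(n):
    """Encode each decimal digit of n as its own 4-bit group."""
    return [format(int(d), '04b') for d in str(n)]

print(bcd_nibbles(1345))   # ['0001', '0011', '0100', '0101']
```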

There was that other one, I can't think of its name right now, that was similar to BCD but altered so that only one bit changed on each increment.

That's the Gray Code (http://en.wikipedia.org/wiki/Gray_code). Used a lot with rotary encoders and similar devices.
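The binary-to-Gray conversion is a one-liner; a quick Python sketch showing that successive codes differ in exactly one bit:

```python
def to_gray(n):
    # Each Gray bit is the XOR of adjacent binary bits.
    return n ^ (n >> 1)

codes = [format(to_gray(i), '02b') for i in range(4)]
print(codes)   # ['00', '01', '11', '10']
```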

so the number will continue to ..... 800 400 200 100 80 40 20 10 8 4 2 1 ?
Not really no. Each digit is encoded as a bit sequence separately.
Well, you would in turn split them, but they kind of are together when you convert them.
This (http://people.ee.duke.edu/~dwyer/courses/ece52/Binary_to_BCD_Converter.pdf) is how you convert a binary number into a set of BCD digits. As you can see, the result at the bottom is a long sequence of bits which you then in turn split into fours.
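That style of converter is the shift-and-add-3 ("double dabble") algorithm; here is a Python sketch of it (my own illustration, not taken from the PDF):

```python
def double_dabble(n, digits=4):
    """Convert binary n to packed BCD by shifting left one bit at a
    time, adding 3 to any BCD digit that is 5 or more before each shift."""
    bcd = 0
    for i in range(n.bit_length() - 1, -1, -1):
        for d in range(digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        bcd = (bcd << 1) | ((n >> i) & 1)   # shift in the next binary bit
    return bcd

print(hex(double_dabble(1345)))   # 0x1345
```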

Well in computing it can go either way, and it often will, even within the same system.
Intel CPUs as an example are "little endian", meaning that the bit with the lowest address is the least significant digit. So, in your example, this would be 1248.
ARM CPUs were big-endian (8421) but as of version 3 they are now bi-endian, meaning that you can control the endianness via software configuration to suit your own needs. Very handy.
When you design hardware that is implemented on an FPGA, it is very easy to swap endianness or to use little endian everywhere when you wanted big endian.
In short, both types are used all over the freaking place in digital systems.
Did you mean SPARC? ARM is little endian by default. SPARC started as big and became bi, ARM started as little and became bi.

Aren't modern computer engineers spoilt today!
Wasn't it the TI 9900 that used bit 0 as the most significant bit? What a barmy idea that was, trying to concatenate words to make larger integers.
Mind you, that was better than the computers that had bit 15 as a parity bit, so concatenating had to include shifting and masking, ughh!

Or processors that have a binary and a decimal mode, and if you leave it in decimal mode you get some really weird problems on things like conditional branches and carries.
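For anyone curious what decimal mode actually does on an add, here's a Python sketch of the nibble-wise correction (the idea behind things like x86's DAA instruction and the 6502's decimal flag; the function name and two-digit scope are my own assumptions):

```python
def bcd_add(a, b):
    """Add two packed two-digit BCD bytes, correcting each nibble
    that overflows past 9. Returns (result_byte, carry)."""
    result, carry = 0, 0
    for shift in (0, 4):                 # ones nibble, then tens nibble
        digit = ((a >> shift) & 0xF) + ((b >> shift) & 0xF) + carry
        carry = 1 if digit > 9 else 0
        if carry:
            digit -= 10                  # wrap the digit, carry to the next
        result |= digit << shift
    return result, carry

r, c = bcd_add(0x45, 0x38)
print(hex(r), c)   # 0x83 0 -- i.e. 45 + 38 = 83
```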