Why do we use BCD code? (Binary Coded Decimal)
tip.can19:
Hello, I'm trying to understand why we use BCD codes. So far I only understand that instead of operating on the whole decimal number, each decimal digit is converted to a 4-bit binary value and then operated on, which might make BCD an easy code to work with. Can you help me understand an application (with an easy example if possible), or the concept behind why we do this? Are there any links to further explain this concept? Please feel free to correct me if my understanding of BCD itself is not correct. Thank you in advance! Regards,
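For a concrete picture of the encoding, here is a minimal C sketch (the example value 93 is arbitrary, chosen just for illustration):

    #include <stdio.h>

    int main(void) {
        unsigned char binary = 93;    /* pure binary: 0b1011101, one 7-bit value */
        unsigned char bcd    = 0x93;  /* packed BCD: 1001 0011, one digit per nibble */

        /* In BCD each decimal digit sits in its own 4-bit nibble, so the
           digits come out with shifts and masks, no division by 10 needed. */
        printf("tens: %d, ones: %d\n", bcd >> 4, bcd & 0x0F);
        return 0;
    }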
Electro Fan:
https://en.m.wikipedia.org/wiki/Binary-coded_decimal

BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware: a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. Most pocket calculators do all their calculations in BCD.

The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.
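A minimal C sketch of the "one digit per sub-circuit" idea, in the spirit of a 7447-style BCD-to-seven-segment decoder (the gfedcba segment order and the function name are illustrative assumptions, not from the post):

    #include <stdint.h>

    /* Segment patterns for digits 0-9, common-cathode, bit order gfedcba. */
    static const uint8_t SEG[10] = {
        0x3F, 0x06, 0x5B, 0x4F, 0x66,   /* 0 1 2 3 4 */
        0x6D, 0x7D, 0x07, 0x7F, 0x6F    /* 5 6 7 8 9 */
    };

    /* Drive a two-digit display from one packed-BCD byte. Each nibble
       indexes the table directly; assumes the input is valid BCD.
       Note there is no binary-to-decimal division anywhere in the path. */
    void show_packed_bcd(uint8_t bcd, uint8_t *tens_seg, uint8_t *ones_seg) {
        *tens_seg = SEG[bcd >> 4];
        *ones_seg = SEG[bcd & 0x0F];
    }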
Tomorokoshi:
Short answer: BCD encodes 0 through 9 just like binary 0 through 9; the codes for binary A through F (1010 through 1111) are generally undefined. A BCD counter rolls over from 9 to 0.

The most familiar example of a BCD computer is the IBM 650. This was a fully decimal computer, using BCD coding: https://en.wikipedia.org/wiki/IBM_650

One of the key concerns at the beginning was how to deal with precise fractional and decimal values on a binary computer. When numbers are represented as binary fractions, it's possible to get errors and rounding that differ from what happens in decimal. Banking and finance had this concern.

A short time after the IBM 650, consider the Hewlett Packard 405CR Digital Voltmeter. This used what was known as "ten-line code", where each digit had its own output wire for each value 0 through 9, so three digits needed 30 lines. Somewhat later, consider equipment such as the Hewlett Packard 3403C True RMS Voltmeter. On its output connector, three digits are encoded as 4-wire BCD, with a single extra wire for the leading "1", for a total of 13 lines. That's quite a bit more efficient than the 405CR.

Why do all this? There was no processor in any of this equipment, and only a couple of bits of memory outside of the numerical counters. BCD was easy to pull directly off the various Nixie or numerical readouts and send to the output port. A monitoring device, such as a printer, could similarly be interfaced with only simple control circuitry: no processor and no more than a couple of bits of memory needed.
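A rough C analogue of that 9-to-0 rollover, modeled as a chain of 7490-style decade counters (the array layout and function name are assumptions for illustration):

    /* Cascaded BCD counter sketch: d[0] is the ones decade, d[1] the
       tens, and so on, each holding 0..9. A decade that passes 9
       rolls over to 0 and carries into the next decade. */
    void bcd_count_up(unsigned char d[], int ndigits) {
        for (int i = 0; i < ndigits; i++) {
            if (d[i] < 9) { d[i]++; return; }  /* bump this decade, done */
            d[i] = 0;                          /* 9 -> 0, carry onward */
        }
    }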
atmfjstc:
Think of something like a simple digital clock. Conceptually, you need to store the hours (0-23), minutes (0-59) and seconds (0-59) in some registers, and display the contents of at least the first two.

Now, you could use 3 registers to store the hours, minutes, etc. in binary, and the clock could count time internally just fine. However, actually displaying the contents of the registers is not simple. Display elements (seven-segment LEDs, Nixies, etc.) can only accept and display 1 digit each, so you need to break up the contents of each register into 2 decimal digits. This means you need logic to perform binary division by 10 and produce both the quotient and the remainder. Standard chips exist that can do this conversion, but that's quite a lot of complexity just for a simple clock.

It's much simpler to store the hours, minutes, etc. already broken up into digits, in BCD form. Then you don't need any conversion logic: the contents of the registers go directly to the corresponding displays. Updating the registers becomes slightly more complicated, but it's very easy to do addition and incrementing on BCD data. You can even do multiplication on BCD digits, and I think there were indeed calculators that did everything 100% in BCD.
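A minimal C sketch of the "slightly more complicated" update logic for the minutes register, assuming packed BCD with the tens digit in the high nibble (the function name is illustrative). With a pure binary register the display path would need m/10 and m%10; here both digits are just nibble splits:

    #include <stdint.h>

    /* Advance a packed-BCD minutes register by one, wrapping 0x59 -> 0x00.
       Each nibble stays in 0..9 and can feed a display digit directly. */
    uint8_t bcd_minutes_tick(uint8_t m) {
        if ((m & 0x0F) < 9) return m + 1;     /* bump the ones digit */
        m &= 0xF0;                            /* ones digit: 9 -> 0 */
        if ((m >> 4) < 5) return m + 0x10;    /* bump the tens digit */
        return 0x00;                          /* 59 -> 00 */
    }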
rjp:
I think of it as a more efficient form of ASCII for numerics: you get to store 2 nibbles in 1 byte. It doesn't show up much in modern computing, but it's vaguely useful sometimes in telemetry, where hex dumps of a binary blob are human-readable without being as inefficient as ASCII.
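A small C sketch of that packing idea (the date-like example value and helper name are made up for illustration; it assumes an even number of digits):

    #include <stdio.h>
    #include <string.h>

    /* Pack a string of decimal digits two-per-byte: "20240131" becomes
       the 4 bytes 0x20 0x24 0x01 0x31, so a hex dump still reads like
       the original number at half the size of ASCII. */
    void pack_bcd(const char *digits, unsigned char *out) {
        size_t n = strlen(digits);
        for (size_t i = 0; i + 1 < n; i += 2)
            out[i / 2] = (unsigned char)(((digits[i] - '0') << 4)
                                         | (digits[i + 1] - '0'));
    }

    int main(void) {
        unsigned char buf[4];
        pack_bcd("20240131", buf);
        for (int i = 0; i < 4; i++)
            printf("%02X ", buf[i]);   /* prints: 20 24 01 31 */
        printf("\n");
        return 0;
    }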