Author Topic: Why do we use BCD code? (Binary Coded Decimal)


Offline tip.can19Topic starter

  • Contributor
  • Posts: 10
  • Country: ca
  • Researcher
Why do we use BCD code? (Binary coded Decimal)
« on: December 03, 2019, 05:41:26 am »
Hello,

I was trying to understand why we use BCD codes. My current understanding is that instead of operating on the whole decimal number, each decimal digit is converted to a 4-bit binary code and then operated on, which makes BCD an easy code to work with. But can you please help me understand an application (with an easy example if possible), or the concept of why we do this? Are there any links to further explain this concept?

Please feel free to correct me if my understanding of BCD itself is not correct.

Thank you in advance!

Regards,

Thanks
Tip
 

Offline Electro Fan

  • Super Contributor
  • ***
  • Posts: 3317
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #1 on: December 03, 2019, 05:56:33 am »
https://en.m.wikipedia.org/wiki/Binary-coded_decimal

BCD is very common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. Most pocket calculators do all their calculations in BCD.
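As a software illustration of "each digit as a separate sub-circuit", here is a minimal C sketch of the digit-to-segment lookup that a BCD-to-seven-segment decoder (e.g. a 7447) performs in hardware. The segment bit assignment (a = bit 0 through g = bit 6) is just an assumption for this example:

Code:
    #include <stdio.h>

    /* One 7-segment pattern per BCD digit 0-9; bit 0 = segment a ... bit 6 = segment g. */
    static const unsigned char seg7[10] = {
        0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F
    };

    int main(void) {
        unsigned char packed = 0x42;        /* packed BCD for decimal 42 */
        unsigned char tens = packed >> 4;   /* high nybble -> tens digit */
        unsigned char ones = packed & 0x0F; /* low nybble  -> ones digit */
        printf("tens segments: 0x%02X, ones segments: 0x%02X\n",
               seg7[tens], seg7[ones]);
        return 0;
    }

Each nybble indexes the table independently; no arithmetic on the full number is ever needed to drive the display.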

The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities.[11][12]
 
The following users thanked this post: tip.can19

Offline Tomorokoshi

  • Super Contributor
  • ***
  • Posts: 1212
  • Country: us
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #2 on: December 03, 2019, 06:14:50 am »
Short answer:

BCD encodes 0 through 9 exactly like binary 0 through 9; the binary codes for A through F (1010-1111) are generally unused and undefined. A BCD counter rolls over from 9 to 0.
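To make that rollover concrete, here is a minimal C model (a sketch, not any particular counter chip) of incrementing a single BCD digit:

Code:
    /* Increment one 4-bit BCD digit; returns 1 on carry out (9 -> 0), else 0.
       The codes A-F are never produced. */
    int bcd_digit_inc(unsigned char *digit) {
        if (*digit == 9) { *digit = 0; return 1; }  /* roll over, carry out */
        (*digit)++;
        return 0;
    }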

The most familiar example of a BCD computer is the IBM 650. This was a fully-decimal computer, using BCD coding:
https://en.wikipedia.org/wiki/IBM_650

One of the key concerns in the early days was how to deal with precise fractional decimal values on a binary computer. Fractions that are exact in decimal (such as 0.1) have no exact binary representation, so binary arithmetic can introduce rounding errors that decimal arithmetic does not. Banking and finance had this concern.
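That difference is easy to demonstrate even today: summing 0.1 repeatedly in binary floating point drifts, where exact decimal (BCD) arithmetic would not. A quick C illustration:

Code:
    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;             /* ten times 0.1 "should" be exactly 1.0 */
        printf("%.17f\n", sum);     /* on IEEE-754 doubles: 0.99999999999999989 */
        return 0;
    }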

A short time after the IBM 650, consider the Hewlett-Packard 405CR Digital Voltmeter. It used what was known as "ten-line code", where each of the ten possible values of a digit had its own output wire, so three digits needed 30 lines.

Somewhat later, consider equipment such as the Hewlett-Packard 3403C True RMS Voltmeter. On its output connector, three digits are encoded as 4-wire BCD, with a single extra wire for the leading "1", for a total of 13 lines. Quite a bit more efficient than the 405CR.

Why do all this? There was no processor in any of this equipment, and only a couple of bits of memory outside the numerical counters. BCD was easy to pull directly off the Nixie or other numerical readout counters and send to the output port, and a monitoring device such as a printer could be interfaced the same way with simple control circuitry alone. No processor, and no more than a couple of bits of memory, needed.
 
The following users thanked this post: tip.can19

Offline atmfjstc

  • Regular Contributor
  • *
  • Posts: 121
  • Country: ro
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #3 on: December 03, 2019, 06:15:34 am »
Think of something like a simple digital clock. Conceptually, you need to store the hour (0-23), minutes (0-59) and seconds (0-59) in some registers, and display the content of the first two at the very least.

Now, you could use 3 registers to store the hour, minutes, etc. in binary, and the clock could count time internally just fine. However, actually displaying the content of the registers is not simple. Display elements (7-segment LEDs, Nixies, etc.) can only accept and display one digit, so you need to break the contents of each register into two decimal digits. That means you need logic to perform binary division by 10 and get both the quotient and the remainder. Standard chips exist that can do this conversion, but that's quite a lot of complexity for a simple clock.
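In software terms, that conversion step is exactly the divide-by-10 those chips implement; a minimal C sketch:

Code:
    /* Split a binary value 0-99 into two display digits -- the divide-by-10
       the clock would otherwise need dedicated logic for. */
    void binary_to_digits(unsigned char value, unsigned char *tens, unsigned char *ones) {
        *tens = value / 10;   /* quotient  -> tens display */
        *ones = value % 10;   /* remainder -> ones display */
    }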

It's much simpler to just store the hours, minutes, etc. already broken up into digits, in BCD form. Then you don't need any sort of conversion logic; the content of the registers goes directly to the corresponding displays. Updating the registers becomes slightly more complicated, but it's very easy to do addition and incrementing on BCD data. You can even do multiplication on BCD digits, and I think there were indeed calculators that did everything 100% in BCD.
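For example, incrementing a minutes register held as two packed BCD digits needs no division at all; a hypothetical C sketch:

Code:
    /* Increment a packed-BCD minutes register (0x00-0x59); wraps 0x59 -> 0x00.
       Returns 1 when the hours register should be incremented. */
    int bcd_minutes_inc(unsigned char *min) {
        if (*min == 0x59) { *min = 0x00; return 1; }  /* 59 -> 00, carry to hours */
        if ((*min & 0x0F) == 9) *min += 0x07;         /* x9 -> (x+1)0: +7 skips A-F */
        else (*min)++;
        return 0;
    }

Each nybble of the register feeds its display decoder directly, before and after the increment.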
 
The following users thanked this post: tip.can19

Offline rjp

  • Regular Contributor
  • *
  • Posts: 124
  • Country: au
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #4 on: December 03, 2019, 06:55:35 am »
I think of it as a more efficient form of ASCII for numerics - you get to store two digits, one per nybble, in one byte.

It doesn't show up much in modern computing. It's vaguely useful sometimes in telemetry: hex dumps of the binary blob are human-readable without being as inefficient as ASCII.
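For instance, a reading of 1234 stored as packed BCD dumps as the bytes 12 34, readable at a glance, where plain binary 1234 would dump as 04 D2. A tiny C illustration:

Code:
    #include <stdio.h>

    int main(void) {
        unsigned char telemetry[2] = { 0x12, 0x34 };  /* decimal 1234, packed BCD */
        /* A hex dump shows "12 34" -- human readable -- vs. binary 1234 = 0x04D2. */
        printf("%02X %02X\n", telemetry[0], telemetry[1]);
        return 0;
    }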
 
The following users thanked this post: tip.can19

Offline ggchab

  • Frequent Contributor
  • **
  • Posts: 283
  • Country: be
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #5 on: December 03, 2019, 09:46:30 am »
Quote from: atmfjstc
I think there were indeed calculators that did everything 100% in BCD.

HP calculators used BCD: https://www.hpmuseum.org/techcpu.htm
 
The following users thanked this post: tip.can19

Offline Brumby

  • Supporter
  • ****
  • Posts: 12413
  • Country: au
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #6 on: December 03, 2019, 10:13:51 am »
The above answers that delve into computing examples are overthinking it. It comes down to simplicity of design for a circuit that performs a dedicated function - like an event counter ... or a clock.

Quote from: atmfjstc
It's much simpler to just store the hours, minutes, etc. already broken up into digits, in BCD form. Then you don't need any sort of conversion logic; the content of the registers goes directly to the corresponding displays.
 
The following users thanked this post: tip.can19

Online forrestc

  • Supporter
  • ****
  • Posts: 720
  • Country: us
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #7 on: December 03, 2019, 10:33:46 am »
Quote from: tip.can19
I was trying to understand why we use BCD codes. My current understanding is that instead of operating on the whole decimal number, each decimal digit is converted to a 4-bit binary code and then operated on, which makes BCD an easy code to work with. But can you please help me understand an application (with an easy example if possible), or the concept of why we do this? Are there any links to further explain this concept?

Please feel free to correct me if my understanding of BCD itself is not correct.

BCD basically means that each digit of a number is coded into 4 bits, usually with two digits packed into an 8-bit byte.

As you can probably infer from most of the answers, BCD isn't commonly used today because of the large computing power available in even the lowest-cost processors. In the days when BCD was prevalent, it was far less computationally expensive to take a number in and store it in BCD, do all your math in BCD, and then output each digit from BCD. Nowadays it doesn't really matter.

As an example, imagine adding the numbers 45 and 12, entered as strings. Without BCD, you'd enter 45, and the computer would convert the ASCII '4' to the number 4, multiply it by 10, then convert the '5' to the number 5 and add it to the previous result. This would be repeated for 12, so you'd end up with the binary numbers 45 and 12, which you can then simply add. To get the result back out, you divide it by 10, giving 5 (integer math), convert that 5 to ASCII and output it. Then you multiply the 5 by 10, subtract that result (50) from the sum (57) to get 7, convert the 7 to ASCII and output it.
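That whole round trip, in rough C terms (restricted to two-digit numbers to match the example):

Code:
    #include <stdio.h>

    int main(void) {
        const char *a = "45", *b = "12";

        /* String -> binary: multiply-accumulate by 10 per digit. */
        int x = (a[0] - '0') * 10 + (a[1] - '0');
        int y = (b[0] - '0') * 10 + (b[1] - '0');
        int sum = x + y;                       /* plain binary add: 57 */

        /* Binary -> string: divide by 10 for the tens, remainder for the ones. */
        char out[3] = { (char)('0' + sum / 10), (char)('0' + sum % 10), '\0' };
        printf("%s\n", out);                   /* prints 57 */
        return 0;
    }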

Note that multiplying and dividing by 10 were fairly expensive operations, generally taking a lot of time (relatively). Most early processors didn't have a native multiply instruction, so it had to be done with some sort of multiplication routine. If you knew you were always dividing or multiplying by 10, you could optimize a routine for that, but you were still looking at a lot of processor cycles.
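For instance, a dedicated times-10 routine reduces to two shifts and an add, shown here in C for clarity:

Code:
    /* x * 10 == x*8 + x*2, i.e. two shifts and one add --
       the kind of special-case routine a multiply-less CPU would use. */
    unsigned times10(unsigned x) {
        return (x << 3) + (x << 1);
    }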

For comparison's sake, let's say that on older processors, 100 or so instruction cycles wouldn't be unreasonable for this operation. That might be a bit off (possibly way low), but it's good enough for this discussion.

If you were doing the same thing with BCD, it would be much simpler. You'd take the first digit of your string, '4', and convert it to the upper BCD nybble (0x40) with a subtraction and a shift. Convert the '5' to the number 5 (another subtraction), then bitwise-OR it with the existing value. Depending on the number of registers, this is about a 3-4 instruction operation so far. Repeat with the second number. The addition can either be done with no additional effort on some processors (those which handle BCD arithmetic natively), or in another 3-4 instructions if the processor doesn't support it. Once you have the result, outputting it is easy: convert the top 4 bits to the first character and output it, then convert the bottom 4 bits to the second character and output it. Probably another 3-4 instructions.
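Here is roughly that BCD path in C for the same 45 + 12 (a sketch of the decimal-adjust step that processors like the 6502 in decimal mode, or the x86 DAA instruction, did for you):

Code:
    #include <stdio.h>

    /* Add two packed-BCD bytes (two digits each) with decimal adjust. */
    unsigned char bcd_add(unsigned char a, unsigned char b) {
        unsigned char lo = (a & 0x0F) + (b & 0x0F);
        unsigned char carry = 0;
        if (lo > 9) { lo += 6; carry = 1; }    /* skip the unused codes A-F */
        unsigned char hi = (a >> 4) + (b >> 4) + carry;
        if (hi > 9) hi += 6;                   /* high-digit carry discarded here */
        return (unsigned char)((hi << 4) | (lo & 0x0F));
    }

    int main(void) {
        /* '4','5' -> 0x45: a subtraction, a shift, and an OR per digit. */
        unsigned char x = (unsigned char)((('4' - '0') << 4) | ('5' - '0'));
        unsigned char y = (unsigned char)((('1' - '0') << 4) | ('2' - '0'));
        unsigned char sum = bcd_add(x, y);     /* 0x45 + 0x12 -> 0x57 */
        putchar('0' + (sum >> 4));             /* top nybble    -> '5' */
        putchar('0' + (sum & 0x0F));           /* bottom nybble -> '7' */
        putchar('\n');
        return 0;
    }

No multiplies, no divides: every step is a subtraction, shift, mask, or add.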

So the BCD implementation comes to around 15 instructions. A clever assembly programmer might even be able to trim it down to 10 or so.

With modern CPUs, which run billions of instructions per second, the added overhead isn't a big issue, especially since you're no longer outputting characters but drawing them on a screen, so the math to strip out each digit is simple by comparison. Multiplying and dividing is also far more efficient on modern CPUs with code generated by modern compilers, as the CPU often has instructions that do a very wide multiply in a couple of cycles.

But back when BCD was big, computers were lucky to run a million instructions per second, with 500,000 being pretty common, so an efficient coding like BCD was a big deal. There are still places where BCD is probably more efficient, usually in embedded and small-processor applications: say, code running on a ten-cent processor clocked at 32 kHz in some cost-critical application, or something driving a display, and so on. But in most cases it's pretty much not used anymore.
 
The following users thanked this post: tip.can19

Offline tip.can19Topic starter

  • Contributor
  • Posts: 10
  • Country: ca
  • Researcher
Re: Why do we use BCD code? (Binary coded Decimal)
« Reply #8 on: December 04, 2019, 02:56:13 am »
Thank you guys for the time and effort you took to explain this simple concept. I now see BCD's role much more clearly. The examples, especially by @Tomorokoshi, @atmfjstc and @forrestc, are very detailed and simple to understand, and I really appreciate it! I will read further on those examples, dig a little deeper, and maybe try to implement them with some simple RTL test cases.

I also tried some simple RTL in Verilog to understand BCD conversion better. The simulations helped my understanding a bit more.

Regards,

Thanks
Tip
 

