A binary system is incapable of representing 1/3.
Decimal is incapable of representing 1/3.
If you multiply 1/3 in either decimal or binary by 3 you will not get 1.
So I did not expect 1.
Where your analysis has gone wrong here is a failure to understand how computers handle floating point numbers.
Really? So which line in the quote you took is incorrect?
It is correct to say that binary is unable to represent 1/3, and that any number which is not exactly 1/3, multiplied by 3, is not 1. Therefore, logically speaking, binary 1/3 * 3 can never be equal to 1.
When a decimal value is converted to binary, the computer is supposed to choose the bit pattern that most closely approximates the decimal value, i.e. the binary representation that has the least error compared to the input value.
Yes... which was the whole point of my post... and it will NEVER store 1/3 accurately.
Similarly, when a binary value is converted back to decimal, the computer is supposed to produce the decimal string that most closely corresponds to the binary value, again the decimal representation that has the least error compared to the original binary value.
So it may happen, when going through the sequence of operations in your test program, that 1.0000000 represents the binary result of the computation with less error than 0.9999999. In that case, the computer will output 1.0000000.
*This* was where I messed up. I did not expect the value to correct itself on the way back.
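To make that round trip concrete, here is a minimal sketch, assuming IEEE 754 single precision and C; the original test program isn't shown, so this is my own reconstruction:

#include <stdio.h>

int main(void)
{
    float third = 1.0f / 3.0f;   /* nearest float to 1/3, slightly too big */
    float back  = third * 3.0f;  /* the product rounds back to exactly 1.0f */

    printf("%.7f\n", third);     /* prints 0.3333333 */
    printf("%.7f\n", back);      /* prints 1.0000000 */
    printf("%s\n", back == 1.0f ? "exactly 1" : "not 1");   /* exactly 1 */
    return 0;
}

Neither stored value is exact, but the rounding of the multiplication happens to land on 1.0 precisely.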
But in essence the computer is making 2 mistakes that just happen to cancel out.
1/3 cannot be represented in binary, so the number it does come up with is WRONG.
When it then multiplies that number by 3, the answer should NOT be 1, so that is also WRONG.
This isn't exactly computer specific either. If you try to represent 1/3 in decimal without rounding, you can't do it. Any number you write down to represent that 1/3 will be wrong. If you then multiply that wrong number you picked by three it should never produce 1. If it does, you are wrong.
Unless you round it.
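Which you can watch happen in decimal too. A hypothetical sketch, not the original program, with printf doing the rounding:

#include <stdio.h>

int main(void)
{
    double third  = 0.3333333;    /* a "wrong" 7-digit stand-in for 1/3 */
    double triple = third * 3.0;  /* 0.9999999, not 1 */

    printf("%.7f\n", triple);     /* prints 0.9999999, still wrong */
    printf("%.6f\n", triple);     /* prints 1.000000, rounded up to 1 */
    return 0;
}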
(The fact that you have asked for 20 decimals in the output has just caused the computer to append another 13 or so zeros to the actual answer. It was pretty much a waste of time doing so, as it doesn't change the computation in any way.)
Oh FFS. I had a choice. I could work out how many decimal places the float would store at that magnitude or I could just stick in 20 to be sure it was BEYOND its limits. Seriously. Typing 20 was a lot faster and I don't think I smoked my CPU making it add a few more zeros. No cores were harmed making it do an extra few dozen instructions.
Also, if it hadn't produced 1, or if the number happened to be 1 only when rounded at the 19th place, I would not have seen it. Of course I could instead calculate how many decimal places it can store at that order of magnitude, taking me 10-15 minutes, or I could just type "20".
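For what it's worth, the extra places really are cheap. A quick sketch, again assuming a float and printf:

#include <stdio.h>

int main(void)
{
    float third = 1.0f / 3.0f;

    /* digits past the float's ~7 significant digits are just the exact
       decimal expansion of the stored binary value, then zeros */
    printf("%.20f\n", third);          /* 0.33333334326744079590 */
    printf("%.20f\n", third * 3.0f);   /* 1.00000000000000000000 */
    return 0;
}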
This was all very clearly illustrated by hans, who did the calculation here:
Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1
In order to do the multiplication, you can just multiply both mantissas, add the exponents and finally normalize the number.
Multiplying the mantissas consists of adding 2 partial products: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
The addition of both exponents: -2 + 1 = -1
We see that the new mantissa is not normalized -> to normalize, shift the mantissa one place to the right and add 1 to the exponent. Now we have mantissa=1.0 and exponent=0
Which means we get the same exact result back, in this case... I think this is a coincidence.
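If anyone wants to check that hand arithmetic against the actual bits, here is a sketch that pulls the float fields apart, assuming IEEE 754 single precision (the dump helper is my own, not from the thread):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

static void dump(const char *label, float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* well-defined way to read the bits */

    int exponent = (int)((bits >> 23) & 0xFF) - 127;   /* remove the bias */
    uint32_t mantissa = bits & 0x7FFFFF;               /* 23 fraction bits */

    printf("%s exponent=%d mantissa=1.", label, exponent);
    for (int i = 22; i >= 0; i--)
        putchar(((mantissa >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    float third = 1.0f / 3.0f;
    dump("1/3:  ", third);          /* exponent=-2, mantissa=1.01010101010101010101011 */
    dump("3.0:  ", 3.0f);           /* exponent=1,  mantissa=1.10000000000000000000000 */
    dump("1/3*3:", third * 3.0f);   /* exponent=0,  mantissa=1.00000000000000000000000 */
    return 0;
}

The three output lines match the steps above.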
Yes. In fairness, if I had wanted to do that I would have had to dust off a few text books, or just do a bit of Google searching, but, as Hans himself points out, it's likely a coincidence.
Do not be fooled by this example giving the correct answer back. The reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers, especially when you are working with larger numbers and trying to retain precision, or when multiplying and dividing things repeatedly. In this case it worked out okay and I didn't expect that. In other examples it will not work out that way. The two wrongs will not make it right.
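For a case where the errors do not cancel, the classic 0.1 + 0.2 in double precision will do. Again just a sketch:

#include <stdio.h>

int main(void)
{
    double sum = 0.1 + 0.2;   /* neither 0.1 nor 0.2 is exact in binary */

    printf("%.17f\n", sum);   /* prints 0.30000000000000004 */
    printf("%s\n", sum == 0.3 ? "equal" : "not equal");   /* not equal */
    return 0;
}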