Author Topic: Wasn't expecting this... C floating point arithmetic  (Read 14212 times)


Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #25 on: April 02, 2018, 02:48:58 pm »
As an aside... "bc", the Linux utility, works to a fixed (but user-settable) number of decimal places.

If you execute:

scale=30000;  (1/3)*3

You get 0 followed by 30,000 9s.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3640
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #26 on: April 02, 2018, 04:05:24 pm »
Of course, we all (should) know that 0.999... is equal to 1.0. I believe that bc uses a decimal representation, which (like binary) has no finite representation for a third.
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: Wasn't expecting this... C floating point arithmetic
« Reply #27 on: April 02, 2018, 04:48:48 pm »
Better test to understand "problem" (which I do not see as such) would be this one:

#include <stdio.h>
 
int main() {
  double q = 1.0;
  float z;

  do {
    q = q + 0.0000000000001;   /* accumulate in double precision           */
    z = q;                     /* narrow the running value to a float      */
  } while (z == 1.0);          /* loop until the float finally leaves 1.0  */

  printf( "q = %.20f\n", q );
  printf( "z = %.20f\n", z );

  return 0;
}


I do not show the result for a reason. When you see the result of the code above, you will presumably understand the result of the original code. Actually, knowing that a standard single-precision float has 24 significant bits is enough :)
« Last Edit: April 02, 2018, 05:05:40 pm by ogden »
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19487
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #28 on: April 02, 2018, 05:22:54 pm »
There is a fundamental flaw apparent here: confusion of arithmetic in computers with arithmetic in maths. The two are different.

A good starting point for the the theory and practice of computer arithmetic is http://people.ds.cam.ac.uk/nmm1/Arithmetic/index.html
Quote
How Computers Handle Numbers.
This could be called "Computer Arithmetic Uncovered". It covers everything that a scientific programmer needs to know about basic arithmetic, for most of the commonly used scientific languages and several applications. Most of what it says was true and relevant in 1970, and will probably be so in 2070. It describes how computers store and process integers and floating point numbers (real and complex), the exceptions that might arise and what they mean. The intent is to describe how to get reliable answers for a reasonable amount of effort, and to be able to understand strange results and effects.

Maclaren has been at the sharp end of many such problems - and how to avoid them.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: agehall

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2184
  • Country: au
Re: Wasn't expecting this... C floating point arithmetic
« Reply #29 on: April 02, 2018, 07:14:34 pm »
Haven't read the thread in detail so I probably shouldn't post, but anyway.

String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4780
  • Country: pm
  • It's important to try new things..
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #31 on: April 02, 2018, 09:07:52 pm »
String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;
I'd call those float literals, not string literals. (I would call "foo" or "3" a string literal, but not a bare 3 in code.)

Neither of those is valid c++, at least according to gcc.
These would be:

    float a = (1/3.0f);
    float b = a*3.0f;

You can't "f" the parenthesized expression. You have to "f" the float constant.
(I didn't realize this until I tried it, but) You can't "f" a decimal constant, at least not in gcc.
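To put the rule in one place, a minimal sketch (the f suffix is part of the spelling of a floating constant, so it needs a decimal point or an exponent in front of it):

Code: [Select]
#include <stdio.h>

int main(void) {
    float a = 1.0f / 3.0f;   /* valid: suffix on a floating constant        */
    float b = a * 3e0f;      /* valid: the exponent form is also a float    */
    /* float c = a * 3f;        invalid: 3 is an integer constant           */
    /* float d = (1/3.0)f;      invalid: the suffix cannot follow a
                                parenthesized expression                    */
    printf("%.20f\n", b);
    return 0;
}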
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1638
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #32 on: April 02, 2018, 09:18:09 pm »
I did all of this in uni, right down to doing floating point calculations with a pen and paper under exam conditions.  That was, well, nearly 20 years ago.  There is a difference between understanding something and remembering all the details.  The details are not something I use every day, so I do not "store" them in my head.  The implications of floating point precision loss I DO use on a daily basis, so I at least remember to be careful with them and write routines to maximise precision.  Should I ever actually need to work out the details of a floating point operation I certainly can.  I chose not to for this post as... it would be mathurbation.

I'm still in the middle of all those exams, which is perhaps why I went straight to the bit-level approach, and which explains the different approaches to this problem quite well. I can't blame anyone for that. The computations fundamentally are not very hard, just tedious, which is why most won't (and shouldn't) bother.

#21 explains the problem quite well with the decimal example.
In contrast: I'm pretty sure one could also design a floating point unit with radix 3. Then representing 1/3 becomes trivial. But why would anyone do that in a computer? It makes no sense.
Just like floating-slash representations are not used in computers, even though they could also do the trick. For all we know, they could be used extensively in some ASICs that need to do some very niche calculation at a high rate, but an average engineer will never see those (and even then, designing completely customized arithmetic units is probably quite unusual).

I think it is more important to understand what happens in a floating point unit, that floats are not perfect, which phenomena may happen, and how to battle them.
 

Offline DBecker

  • Frequent Contributor
  • **
  • Posts: 326
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #33 on: April 02, 2018, 09:50:41 pm »
Haven't read the thread in detail so I probably shouldn't post, but anyway.

String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;


This is the correct answer.

It might seem to be a little close to weasel-language-lawyer details, but C is specified to allow the compiler to do a substantial amount of arithmetic at compile time.  Calculations at compile time must be at least to the resolution/range/precision of the run-time, and are allowed to be arbitrarily more precise/correct.

Even at run time, intermediate results may be kept in a higher precision/resolution format.  The best-known example is Intel co-processors doing IEEE 80 bit floating point, converting from and to the in-memory format (32 or 64 bits) only when loading or storing.

Allowing the compiler and run time to do this has a substantial positive effect on performance, code size, and even correctness.  The latter matters when there are layers of macros with scaling and offsets that might over- or under-flow if the operations were done naively.
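As a small illustration of that last point, a hedged sketch: it mimics a wider intermediate with an explicit double rather than relying on x87 behaviour (which depends on the target and compiler flags); the wider format keeps the 1.00000002980... value seen in the earlier outputs, while narrowing back to a 32-bit float rounds it to exactly 1.0.

Code: [Select]
#include <stdio.h>

int main(void) {
    float x = 1.0f / 3.0f;

    volatile float narrowed = x * 3.0f;   /* forced back into a 32-bit float */
    double widened = (double)x * 3.0;     /* product kept in double          */

    printf("narrowed to float : %.20f\n", narrowed);
    printf("kept in double    : %.20f\n", widened);
    return 0;
}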
 
« Last Edit: April 02, 2018, 10:02:01 pm by DBecker »
 

Offline Naguissa

  • Regular Contributor
  • *
  • Posts: 114
  • Country: es
    • Foro de electricidad, electrónica y DIY / HUM en español
Re: Wasn't expecting this... C floating point arithmetic
« Reply #34 on: April 02, 2018, 09:56:12 pm »
The compiler optimizes code by default; you need to tell it not to if you want.

Compiler sees:

x/3
x*3

So by default it assumes the combined result is just x.

This is easily verified by reading the output listing.

As shown in #8, at -O0 it performs the operations exactly as written.  There can be no confusion about what the compiler is doing in this case.  :)

Tim
You are right, i didn't see that message until now... (on mobile)

Sent from my Jolla using Tapatalk


Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #35 on: April 02, 2018, 10:07:39 pm »
Quote
0.33333334326744079590 * 3 = 1.00000002980232238770
Yeah, fine.   And then you store 1.00000002980232238770 in a 32bit float, and you get 1.000000 because the least significant bits disappear.  What's the problem?

This has nothing to do with compiler optimization or "cleverness."   Before you make accusations like that, you should write code that the compiler CAN'T optimize, and see if you get the same results.

Code: [Select]
#include <stdio.h>
 
int main() {
    float divisor, multiplier;
    scanf("%f %f", &divisor, &multiplier);
    float a = (1/divisor);
    float b = a*multiplier;
    printf( "%.20f * %f = %.20f\n", a, multiplier, b  );
}
Quote
./a.out
3.0 3.0
0.33333334326744079590 * 3.000000 = 1.00000000000000000000
« Last Edit: April 02, 2018, 10:09:13 pm by westfw »
 

Offline IanB

  • Super Contributor
  • ***
  • Posts: 11876
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #36 on: April 03, 2018, 01:43:21 am »
A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

Where your analysis has gone wrong here is a failure to understand how computers handle floating point numbers.

When a decimal value is converted to binary, the computer is supposed to choose the bit pattern that most closely approximates the decimal value, i.e. the binary representation that has the least error compared to the input value.

Similarly, when a binary value is converted back to decimal, the computer is supposed to produce the decimal string that most closely corresponds to the binary value, again the decimal representation that has the least error compared to the original binary value.

So it may happen, when going through the sequence of operations in your test program, that 1.0000000 represents the binary result of the computation with less error than 0.9999999. In that case, the computer will output 1.0000000.

(The fact that you have asked for 20 decimals in the output has just caused the computer to append another 13 or so zeros to the actual answer. It was pretty much a waste of time doing so, as it doesn't change the computation in any way.)

This was all very clearly illustrated by hans, who did the calculation here:

Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1

In order to do the multiplication, you can just multiply both mantissas, add the exponents and finally normalize the number.
Multiply mantissa; consists of 2 additions: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
The addition of both exponents -2+1 = -1
We see that the new mantissa is not normalized -> in order to normalize you shift the mantissa 1 to right, add 1 to exponent. Now we have mantissa=1.0 and exponent=0
Which means we get the same exact result back, in this case... I think this is a coincidence.
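To make that round trip visible, a minimal sketch (assuming IEEE 754 single precision): printing with plenty of digits shows the exact value the computer actually stored for 1/3, and the product it rounds back to.

Code: [Select]
#include <stdio.h>

int main(void) {
    float third = 1.0f / 3.0f;    /* the float nearest to 1/3               */
    float prod  = third * 3.0f;   /* rounds to the nearest float, exactly 1 */

    printf("third = %.25f\n", third);
    printf("prod  = %.25f\n", prod);
    return 0;
}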
 

Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #37 on: April 03, 2018, 03:12:06 am »
A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

Where your analysis has gone wrong here is a failure to understand how computers handle floating point numbers.


Really?  So which line in the quote you took is incorrect?

It is correct to say that binary is unable to represent 1/3, and that any number which is not 1/3, multiplied by 3, is not 1.  Therefore, logically speaking, binary 1/3 * 3 cannot ever be equal to 1.

Quote
When a decimal value is converted to binary, the computer is supposed to choose the bit pattern that most closely approximates the decimal value, i.e. the binary representation that has the least error compared to the input value.

Yes... which was the whole point of my post... and it will NEVER store 1/3 accurately.

Quote
Similarly, when a binary value is converted back to decimal, the computer is supposed to produce the decimal string that most closely corresponds to the binary value, again the decimal representation that has the least error compared to the original binary value.

So it may happen, when going through the sequence of operations in your test program, that 1.0000000 represents the binary result of the computation with less error than 0.9999999. In that case, the computer will output 1.0000000.

*This* was where I messed up.  I did not expect the value to correct itself on the way back.

But in essence the computer is making 2 mistakes that just happen to cancel out.

1/3 cannot be represented in binary, so the number it does come up with is WRONG.

When it then multiplies that number by 3, the answer should NOT be 1, so that is also WRONG.

This isn't exactly computer specific either.  If you try and represent 1/3 in decimal without rounding, you can't do it.  Any number you write down to represent that 1/3 will be wrong.  If you then multiply that wrong number you picked by three it should never produce 1.  If it does, you are wrong.

Unless you round it.

Quote
(The fact that you have asked for 20 decimals in the output has just caused the computer to append another 13 or so zeros to the actual answer. It was pretty much a waste of time doing so, as it doesn't change the computation in any way.)

Oh FFS.  I had a choice.  I could work out how many decimal places the float would store at that magnitude or I could just stick in 20 to be sure it was BEYOND its limits.  Seriously.  Typing 20 was a lot faster and I don't think I smoked my CPU making it add a few more zeros.  No cores were harmed making it do an extra few dozen instructions.

Also, if it hadn't produced 1, or if the number happened to be 1 only if it was rounded to the 19th place, I would not have seen it.  Of course I could instead calculate how many decimal places it can store at that order of magnitude, taking me 10-15 minutes, or I could just type "20".

Quote
This was all very clearly illustrated by hans, who did the calculation here:

Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1

In order to do the multiplication, you can just multiply both mantissas, add the exponents and finally normalize the number.
Multiply mantissa; consists of 2 additions: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
The addition of both exponents -2+1 = -1
We see that the new mantissa is not normalized -> in order to normalize you shift the mantissa 1 to right, add 1 to exponent. Now we have mantissa=1.0 and exponent=0
Which means we get the same exact result back, in this case... I think this is a coincidence.

Yes.  In fairness, if I had wanted to do that I would have had to dust off a few textbooks, or just do a bit of Google searching, but even so, as Hans points out, it's likely a coincidence.

Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision.  Or when multiplying and dividing things repeatedly.  In this case it worked out okay and I didn't expect that.  In other examples it will not work out that way.  The two wrongs will not make it right.


« Last Edit: April 03, 2018, 03:15:22 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #38 on: April 03, 2018, 03:38:08 am »
Here are two articles that should make everything a lot clearer:

What Every Programmer Should Know About Floating-Point Arithmetic

http://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloads/floating-point-guide-2015-10-15.pdf

What Every Computer Scientist Should Know About Floating-Point Arithmetic

http://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

Read the first link  ;)
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: paf, agehall, Jacon

Offline IanB

  • Super Contributor
  • ***
  • Posts: 11876
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #39 on: April 03, 2018, 03:46:05 am »
But in essence the computer is making 2 mistakes that just happen to cancel out.

1/3 cannot be represented in binary, so the number it does come up with is WRONG.

When it then multiplies that number by 3, the answer should NOT be 1, so that is also WRONG.

Computers do not make mistakes. They do exactly what their specification says they shall do. (Unless you are claiming a bug or defect in implementation, which will require strong evidence on your part.)

Quote
This isn't exactly computer specific either.  If you try and represent 1/3 in decimal without rounding, you can't do it.  Any number you write down to represent that 1/3 will be wrong.  If you then multiply that wrong number you picked by three it should never produce 1.  If it does, you are wrong.

Unless you round it.

Well, maybe. Maybe not. I can represent 1/3 in decimal as 0.33333... recurring. If I multiply that by 3 I will get 0.99999... recurring. And 0.99999... recurring is mathematically defined as identical to 1.00000... recurring. No rounding needed.

Quote
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.

Says who?

Quote
Especially when you are working with larger numbers and trying to retain precision.  Or when multiplying and dividing things repeatedly.  In this case it worked out okay and I didn't expect that.  In other examples it will not work out that way.  The two wrongs will not make it right.

This is not correct. As someone else has already observed in this thread, floating point numbers retain their precision when you multiply and divide them. The whole point of floating point representation is that large numbers retain the same precision as small numbers.

It is addition and subtraction that causes problems.
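A minimal sketch of that difference (assuming IEEE 754 single precision): the multiplication below is harmless, but adding a small number to a much larger one simply loses it.

Code: [Select]
#include <stdio.h>

int main(void) {
    float big   = 1.0e8f;        /* exactly representable as a float          */
    float small = 1.0f;
    float prod  = big * small;   /* multiplication: relative error stays tiny */
    float sum   = big + small;   /* addition: small is absorbed completely    */

    printf("big * small = %.1f\n", prod);
    printf("big + small = %.1f\n", sum);         /* 100000000.0               */
    printf("sum - big   = %.1f\n", sum - big);   /* 0.0, not 1.0              */
    return 0;
}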
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #40 on: April 03, 2018, 03:54:18 am »
Testing:
Code: [Select]
#include <stdio.h>
#include <math.h>
 
int main() {

    double divisor, multiplier;
    scanf("%lf %lf", &divisor, &multiplier);
    double a = pow(divisor, -1.0);
    double b = a * multiplier;
    printf( "%.20f * %.2f = %.20f\n", a, multiplier, b  );
}

Outputs:
Code: [Select]
~/Desktop$ ./a.out
9191919191919191 9191919191919191
0.00000000000000010879 * 9191919191919192.00 = 1.00000000000000000000
Obviously I overran the precision of the "doubles" - see 92 instead of the entered 91 - but the result is still suspiciously correct.
What is more, if I copy "0.00000000000000010879 * 9191919191919192.00" and manually enter it into Calculator (same hardware and bit width) I get 0.999988889. The difference is enormous; the fifth digit after the decimal point is wrong. I'd think that the FPU holds intermediate results of the calculation and does some kind of smart-ass correction during consecutive steps of the calculation. When Calculator does the math without this "additional track data" it fails to output 1.0000.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3717
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #41 on: April 03, 2018, 04:18:22 am »
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision.  Or when multiplying and dividing things repeatedly.

I don't think you actually understand at all the limitations of floating point, and how they are different from fixed point.  I think you have just heard that, and are repeating it without understanding.

In particular, multiplication and division, even "repeatedly" are not major problems.  Multiplication and division individually generate less than 1 ULP error per operation. Even after many operations that tends to not accumulate too much.

The big problems in floating point more generally come down to either subtraction of numbers of very close magnitude, or adding numbers of very different magnitude.  In order to do addition or subtraction, all three operands (in and out) have to be adjusted to the same exponent.  If one of them is much smaller magnitude than the others it will have many fewer significant digits than the format suggests.  This loss of precision can be harmless or totally mess up your algorithm, depending on what you are doing.
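A minimal sketch of the subtraction case (assuming IEEE 754 single precision): each input carries about seven significant decimal digits, but their difference keeps only three or four of them.

Code: [Select]
#include <stdio.h>

int main(void) {
    /* Both values are stored with a tiny relative rounding error, but
       subtracting them cancels the leading digits and leaves that error
       dominating the small result.                                        */
    float a = 1.0002f;
    float b = 1.0001f;

    printf("a - b = %.10f\n", a - b);   /* the true answer is 0.0001 */
    return 0;
}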
 
The following users thanked this post: Jacon

Offline hans

  • Super Contributor
  • ***
  • Posts: 1638
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #42 on: April 03, 2018, 07:53:17 am »
Testing:
Code: [Select]
#include <stdio.h>
#include <math.h>
 
int main() {

    double divisor, multiplier;
    scanf("%lf %lf", &divisor, &multiplier);
    double a = pow(divisor, -1.0);
    double b = a * multiplier;
    printf( "%.20f * %.2f = %.20f\n", a, multiplier, b  );
}

Outputs:
Code: [Select]
~/Desktop$ ./a.out
9191919191919191 9191919191919191
0.00000000000000010879 * 9191919191919192.00 = 1.00000000000000000000
Obviously I overran the precision of the "doubles" - see 92 instead of the entered 91 - but the result is still suspiciously correct.
What is more, if I copy "0.00000000000000010879 * 9191919191919192.00" and manually enter it into Calculator (same hardware and bit width) I get 0.999988889. The difference is enormous; the fifth digit after the decimal point is wrong. I'd think that the FPU holds intermediate results of the calculation and does some kind of smart-ass correction during consecutive steps of the calculation. When Calculator does the math without this "additional track data" it fails to output 1.0000.

That is because this example doesn't show the strength of floating point at all. In a 20-digit fixed-point representation, the left-hand side of the calculation only has 5 significant digits, so all calculation errors start to appear at the fifth digit. 32-bit floats have approximately 6 to 7 significant decimal digits and 64-bit floats about 15 to 16, so you're throwing away a fair bit of the operand when entering it manually into the calculator.

If we consider 32-bit floats:
1.08791 * 10^-16 * 9191919191919192 = 0.999998081
1.087912 * 10^-16 * 9191919191919192 = 0.999999919 (7 leading 9s)
1.0879121 * 10^-16 * 9191919191919192 = 1.000000011 (7 zeros)
etcetera
The last result would get rounded back to 1.0. The 1 before last would be 0.999999940395355224609375, exactly 1 LSB in mantissa from 1.0.
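For reference, a quick sketch (assuming IEEE 754 single precision and C99's <math.h>) that prints that float just below 1.0 directly:

Code: [Select]
#include <stdio.h>
#include <math.h>

int main(void) {
    /* The representable float immediately below 1.0f is 1 - 2^-24. */
    float below = nextafterf(1.0f, 0.0f);
    printf("%.25f\n", below);
    return 0;
}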
 
The following users thanked this post: MasterT

Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #43 on: April 03, 2018, 10:38:25 am »
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision.  Or when multiplying and dividing things repeatedly.

I don't think you actually understand at all the limitations of floating point, and how they are different from fixed point.  I think you have just heard that, and are repeating it without understanding.

In particular, multiplication and division, even "repeatedly" are not major problems.  Multiplication and division individually generate less than 1 ULP error per operation. Even after many operations that tends to not accumulate too much.

The big problems in floating point more generally come down to either subtraction of numbers of very close magnitude, or adding numbers of very different magnitude.  In order to do addition or subtraction, all three operands (in and out) have to be adjusted to the same exponent.  If one of them is much smaller magnitude than the others it will have many fewer significant digits than the format suggests.  This loss of precision can be harmless or totally mess up your algorithm, depending on what you are doing.

I have real life experience of floating point errors.  I was just finding it difficult to produce the errors I was expecting with simple examples.

A real-world example was writing some very simple banking operations, using a Java BigDecimal that was returned from the SOAP API of the bank mainframe.  We simply deducted a transfer amount from it to display.  The result in one particular test was expected to be 20 Turkish lira.  However the result we got was 20.000001... plus a load of garbage.  We didn't have time to explain it; we checked we weren't doing anything stupid, took the number as a string, truncated it and displayed it.  We had hours, not days, to fix it.

In the real world, with real numbers and slightly poorly thought-out code, they appear quite often.  The REALLY bad thing about them is that it is fairly difficult to even know they are happening, unless your unit tests are incredibly exhaustive.

But... here's one with addition.  It's not even high precision. 
Code: [Select]
#include <stdio.h>

int main() {
    float b = 0.1;   /* nearest float to 0.1, not exactly 0.1        */
    float c = 0;
    for( int i=0; i<100; i++ ) {
        c += b;      /* every addition rounds to the nearest float   */
    }
    printf( "%.20f", c );
}

Of course it looks "contrived" to the academic, who would say, "Well, just multiply it by 100", but if the value is being read in from a user, a bank account or a sensor and just happens to be a number the computer doesn't like, it goes off.  Adding up all the VAT or transaction fees, for instance.

Also, I didn't necessarily mean multiplying and dividing a number by the same amount each time; I mean real-world, elongated calculations involving for loops and real-world numbers.  It is easy to forget, as you often don't print your intermediates, that you are spanning large ranges of magnitude and precision.

So I still stand by it: computers suck at maths, make simple errors, and can't represent real-world numbers very well, and without trying to avoid such mistakes, or identify them when they do happen, your code can end up fairly badly out in its calculations.
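For what it's worth, here is a hedged sketch of compensated (Kahan) summation, one common way to keep that kind of accumulated error down (assuming IEEE 754 floats and no aggressive floating-point reassociation such as -ffast-math):

Code: [Select]
#include <stdio.h>

int main() {
    float step = 0.1f;

    /* Naive accumulation: every += rounds, and the error drifts. */
    float naive = 0.0f;
    for( int i=0; i<100; i++ ) {
        naive += step;
    }

    /* Kahan (compensated) summation: carry the rounding error forward. */
    float sum = 0.0f, comp = 0.0f;
    for( int i=0; i<100; i++ ) {
        float y = step - comp;   /* re-inject the previously lost low bits  */
        float t = sum + y;       /* big + small: low bits of y get dropped  */
        comp = (t - sum) - y;    /* measure exactly what was just dropped   */
        sum = t;
    }

    printf( "naive = %.20f\n", naive );
    printf( "kahan = %.20f\n", sum );
}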
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline daveshah

  • Supporter
  • ****
  • Posts: 356
  • Country: at
    • Projects
Re: Wasn't expecting this... C floating point arithmetic
« Reply #44 on: April 03, 2018, 10:47:53 am »
I understand financial applications, at least historically, used decimal fixed point to avoid these kinds of disconcerting errors and match conventional accounting.
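A minimal sketch of that idea (hypothetical amounts; money held as a whole number of cents in a 64-bit integer, so sums are exact and rounding only happens where it is intended):

Code: [Select]
#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t price_cents = 1999;                           /* 19.99            */
    int64_t vat_cents   = (price_cents * 20 + 50) / 100;  /* 20% VAT, rounded */
    int64_t total_cents = price_cents + vat_cents;

    printf("total: %lld.%02lld\n",
           (long long)(total_cents / 100),
           (long long)(total_cents % 100));
    return 0;
}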
 

Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #45 on: April 03, 2018, 10:51:56 am »
Computers do not make mistakes. They do exactly what their specification says they shall do. (Unless you are claiming a bug or defect in implementation, which will require strong evidence on your part.)

Go and buy a top-end gaming rig.  Read the spec very carefully.  Fulfil all of its requirements and then overclock it to its maximum allowed figures, from the spec.

Now, run Prime95 on it for a day.

You may see that Prime95 will fail as the CPU has provided incorrect prime numbers.

Note these errors are not crashes; we are not talking about lock-ups.  We are talking about arithmetic errors.  Also, by its design Prime95 runs entirely in the core to remove the latency of memory access.

Most computers are sold considerably throttled back from their "lab maximum" potential for this reason.

Besides, it depends on your context as to what a "mistake" is.

If you calculate and add up all the VAT for a million orders and are out by 15 cents at the end, try explaining the floating point spec to the accountant... do you think he will say, "Oh, okay then, it's not a mistake"?  Of course not.

If you run a PC 24/7 and a random burst of RF or a mains spike causes a bit error in the CPU which results in software crashing, then in the academic world of hardware, electronics and textbooks you could argue that was not a mistake.  That is like saying that because I am hungover, adding 1 and 2 to get 4 is not a mistake either.
« Last Edit: April 03, 2018, 10:54:25 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #46 on: April 03, 2018, 11:01:13 am »
Here is another slightly contrived example, but it is one that a programmer could make so very, very easily.

Code: [Select]
#include <stdio.h>

int main() {
    float a = 1;
    double b = 1;
    a = a / 3.0f;   /* 1/3 rounded to single precision             */
    b = b / 3.0f;   /* 1/3 computed and kept in double precision   */
    if( a == b ) {
        printf( "Yes" );
        return 0;
    }
    printf("No");
}

and another...

Code: [Select]
#include <stdio.h>

int main() {
    float a = 1;
    float b = 1;
    float c = 10;
    a = a / c;    /* 0.1, as close as a float can get            */
    b = b / c;    /* 0.1 again                                   */
    b = b / c;    /* 0.01, rounded once more                     */
    b = b * c;    /* back towards 0.1, but the rounding differs  */
    if( a == b ) {
        printf( "Yes" );
        return 0;
    }
    printf("No");
}

« Last Edit: April 03, 2018, 11:04:21 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline glarsson

  • Frequent Contributor
  • **
  • Posts: 814
  • Country: se
Re: Wasn't expecting this... C floating point arithmetic
« Reply #47 on: April 03, 2018, 11:19:36 am »
A properly educated programmer almost never compares floating point values using "==", only in very special and rare cases.

Also, don't use floating point to represent money. It leads to all sorts of nasty problems. A share price of 100.0000001 is not the same as 99.99999999999 even if displayed rounded to two decimal places. Expensive mistake...
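For the comparison part, a hedged sketch of one common alternative to ==, using the values from the second example above; the right tolerance, and whether it should be relative or absolute, is application specific:

Code: [Select]
#include <stdio.h>
#include <math.h>

/* Treat two floats as "equal" when they differ by a small relative tolerance. */
static int nearly_equal(float a, float b, float rel_tol)
{
    float diff  = fabsf(a - b);
    float scale = fmaxf(fabsf(a), fabsf(b));
    return diff <= rel_tol * scale;
}

int main(void) {
    float c = 10.0f;
    float a = 1.0f / c;
    float b = ((1.0f / c) / c) * c;   /* mathematically the same value */

    printf("==           : %s\n", (a == b) ? "equal" : "not equal");
    printf("nearly_equal : %s\n", nearly_equal(a, b, 1e-6f) ? "equal" : "not equal");
    return 0;
}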
 

Online paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4046
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #48 on: April 03, 2018, 11:43:32 am »
A properly educated programmer almost never compares floating point values using "==", only in very special and rare cases.

Correct, it was to highlight they are not the same.  So if you take those two numbers and carry on with further calculations they may continue to diverge.

Academically, if the two numbers should be the same but aren't, then using them in further calculations may produce more incorrect results.

Compound errors.

It takes very careful thought to arrange sequences of calculations to prevent compound errors resulting from these diverging approximations caused by floating point.

It takes even more careful and tedious testing to identify that your sums are wrong in the event it happens.

Quote
Also, don't use floating point to represent money. It leads to all sorts of nasty problems. A share price of 100.0000001 is not the same as 99.99999999999 even if displayed rounded to two decimal places. Expensive mistake...

It's funny you mention that, because I have worked in a stock exchange.  They DO use doubles to calculate financial values.  However, they are not compared with == but with < and >, and usually you would not be working with decimals that fine.  An example might be calculating the mid point (and other market data indicators) of a board, something that happens potentially hundreds of thousands of times a second for each instrument.

The only routine I can remember that wasn't just comparing numbers against a reference was calculating aggregate long/short position margins on a credit limit filter, for which we were provided the equation in a PDF, and the first thing it did was round down all the financial values to whole dollar amounts.  The filter was not concerned with your 29c orders, but with your broker sitting on a long position of 10 million dollars, not offset against the short position, being able to settle.

There were also routines using fixed-point mathematics.  Advanced math libraries, or even Math.*, were not permitted in my domain, as my code was on the critical order flow path.  No Boost either.  Raw, low-level C, sub-microsecond latency wire to wire.  Thankfully most of the code just translated between different exchange and broker message formats and totalled up risk.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23018
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #49 on: April 03, 2018, 11:48:07 am »
We're "slow finance" and we use our own math library because we have all the time in the world :)

It's actually a typed constraint-solver engine which uses only rational, precise decimal types, and any loss of precision has to be manually accounted for, i.e. on conversion to and from double-precision values. It generates C# code which is fast and has no assumptions in it, no precision loss, and no human errors (assuming the input was correct).

edit: based on SICP 3.3.5: https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-22.html#%_sec_3.3.5
« Last Edit: April 03, 2018, 11:51:20 am by bd139 »
 

