Author Topic: Wasn't expecting this... C floating point arithmetic  (Read 14234 times)


Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Wasn't expecting this... C floating point arithmetic
« on: April 02, 2018, 10:33:29 am »
So I ran this:

Code: [Select]
#include <stdio.h>
 
int main() {
    float a = (1/3.0);
    float b = a*3;
    printf( "%.20f * 3 = %.20f\n", a, b  );
}

The output I was not expecting.

Code: [Select]
paul@localhost ~ $ gcc test.c
paul@localhost ~ $ ./a.out
0.33333334326744079590 * 3 = 1.00000000000000000000

My reaction of course was:  "No it's not!"

I was expecting the wrong answer for the right reasons, but got the right answer for the wrong reasons.

I'm assuming there is some compiler cancellation in effect.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #1 on: April 02, 2018, 10:47:35 am »
A float is 'only' 32 bit with (AFAIK) 5 bits for the mantissa and 1 bit for the sign which leave 26 bits to hold a number between 0 and 67 million. 67 million is close to 8 decimal digits of resolution. On top of that a floating point isn't infinitely accurate. A float is still a number using a finite number of bits so printing it with 20 decimals isn't sensible. All in all the outcome doesn't surprise me.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #2 on: April 02, 2018, 10:50:18 am »
Well, 0.33333334326744079590 * 3 = 1.0000000298023223877, well within what a float can represent.

Or do you mean that 1.0f is (whatever), because most decimals cannot be represented exactly?

Actually, 1.0f isn't one of those, is it?  So it should be exact?

1 LSB at that exponent is a whopping 1.19e-7, so the 2.98e-8 residue should round down nicely, no?

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #3 on: April 02, 2018, 11:01:11 am »
The point is the compiler is hiding the fact that computers can't do maths.  I would expect this of a higher level language but not of C.

0.33333334326744079590 * 3 IS NOT = 1.00000000000000000000

Not on any planet.

0.33333334326744079590 * 3 = 1.00000002980232238770

Sounds much more realistic.  Going to higher precision is pointless as it loses the plot at the 8th decimal, in this example.

But because the compiler sees (I assume) that I previously divided that number of 3 and I'm now multiplying it by 3 it cancels the two out and just returns the 1.

EDIT, I believe that 1.0 can be represented exactly as a float.  0.1 can't.  There is a nice article about this in the Python docs, including how to get Python to print its raw representation of the number; it then explains some of the mechanisms used to keep things sane.
« Last Edit: April 02, 2018, 11:03:17 am by paulca »
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #4 on: April 02, 2018, 11:06:36 am »
The point is the compiler is hiding the fact that computers can't do maths.  I would expect this of a higher level language but not of C.

Why are you accusing the compiler of making an error you told it to commit?

Why don't you inspect the .lst and see what it's really doing?

Tim
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #5 on: April 02, 2018, 11:43:58 am »
The point is the compiler is hiding the fact that computers can't do maths.  I would expect this of a higher level language but not of C.

Why are you accusing the compiler of making an error you told it to commit?

Why don't you inspect the .lst and see what it's really doing?

Tim

I think you might be missing the point.  I am accusing the compiler of NOT making the error I told it to commit. Of hiding the error it should have made.

(1/3)*3, answer 1.  This is correct mathematically, but not what floating point arithmetic gives you.

A computer should NOT be able to answer (1/3)*3 correctly using pure floating point arithmetic.  But it does give the correct answer.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Wasn't expecting this... C floating point arithmetic
« Reply #6 on: April 02, 2018, 11:54:57 am »
As others have said, the C 'float' type has limited precision. What you get is perfectly normal.
Floating point is just an approximation, and for single-precision FP, you can take a look at this: https://en.wikipedia.org/wiki/Single-precision_floating-point_format

If you want much better precision, use the 'double' type. It's double-precision FP and uses 64 bits on most CPUs.
Except on microcontrollers or very specific targets, or to save storage (32-bit vs 64-bit), there is no good reason to use the 'float' type nowadays in C. On CPUs that have a double precision FPU (floating point unit), using 'double' will even be faster (and of course have much better precision), because this is the CPU's native FP representation. Pretty much all modern CPUs (that are not classified as MCUs) now have double-precision FPUs. I usually cringe when I see "floats" in C code targeted at PCs.

Now on most microcontrollers, this is different. Some now have an integrated FPU (like the ARM Cortex M4 cores), most often a single-precision one, so the 'float' type makes sense. And for those without integrated FPUs, floating point operations can be very expensive in execution time.

If the limited precision of single-precision FP is an issue in your application AND you can't use double-precision for a good reason (such as performance) OR you need to control the representation of your numbers in a specific manner, you have to resort to using integers. Fixed-point arithmetic ( https://en.wikipedia.org/wiki/Fixed-point_arithmetic ) can be a nice alternative to floating point. And if you only ever deal with rational numbers, you can also use a rational representation (a pair of integers (a, b) representing a/b). In both cases, all operations can be done with integers.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #7 on: April 02, 2018, 11:58:21 am »
A float is 'only' 32 bit with (AFAIK) 5 bits for the mantissa and 1 bit for the sign which leave 26 bits to hold a number between 0 and 67 million. 67 million is close to 8 decimal digits of resolution. On top of that a floating point isn't infinitely accurate. A float is still a number using a finite number of bits so printing it with 20 decimals isn't sensible. All in all the outcome doesn't surprise me.

The mantissa is a 23 bit normalized fixed point number, exponent is 8-bits with bias 127, and of course the sign bit.
This means the mantissa can represent values in the range 1.0000000 to 1.99999994, which confirms that a float is good for about 7 digits.

Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1

To do the multiplication, you multiply both mantissas, add the exponents, and finally normalize the number.
Multiplying the mantissas consists of 2 additions: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
Adding the exponents: -2 + 1 = -1
The new mantissa is not normalized, so shift it right by 1 and add 1 to the exponent. Now we have mantissa=1.0 and exponent=0.
Which means we get the same exact result back, in this case... I think this is a coincidence.

E.g. calculate 1/99 - 1/100. Algebraically we know this is 1/9900, but :
Code: [Select]
#include <stdio.h>
int main() {
    float a = 1/99.0f;
    float b = 1/100.0f;
    printf("%.20f %.20f\n", a-b, 1.0/(a-b));
}
Code: [Select]
./a.out
0.00010101031512022018 9899.97901511170130106620
Whoops! That's already -0.00021% error. Do this kind of calculation 100 times for a particular algorithm, and you're suddenly at -0.02% error just from representation and arithmetic errors.

Floating point multiplication and division are actually the simplest and best-behaved operations you can do. Floating point addition/subtraction is a complete horror show. That is because you first need to align the bits of both mantissas to do a fixed point addition. So if you're dealing with numbers that are 5 orders of magnitude apart, you're shifting 5 digits' worth of mantissa out of one of your operands. Floating point units do use slightly wider adders (some "guard bits" are kept), but these cannot prevent calculation errors from appearing.

Also look at the rounding mode your floating point unit is operating in. IEEE 754 has standardized several rounding modes for when bits need to be truncated/rounded, and it can make a difference in which direction errors accumulate. Floating point always has some error associated with it, but if you can average errors around 0 you're doing it right.

edit:

Overview of different rounding modes:
Code: [Select]
#include <stdio.h>
#include <fenv.h>
int main() {
    fesetround(FE_TONEAREST);
    float a = 1/3.0f;
    float b = a*3.0f;
    printf("FE_TONEAREST: %.20f %.20f\n", a, b);
    fesetround(FE_UPWARD);
    a = 1/3.0f;
    b = a*3.0f;
    printf("FE_UPWARD: %.20f %.20f\n", a, b);
    fesetround(FE_DOWNWARD);
    a = 1/3.0f;
    b = a*3.0f;
    printf("FE_DOWNWARD: %.20f %.20f\n", a, b);
    fesetround(FE_TOWARDZERO);
    a = 1/3.0f;
    b = a*3.0f;
    printf("FE_TOWARDZERO: %.20f %.20f\n", a, b);
}
Build with g++:
Code: [Select]
$ g++ test.c
$ ./a.out
FE_TONEAREST: 0.33333334326744079590 1.00000000000000000000
FE_UPWARD: 0.33333334326744079590 1.00000011920928955079
FE_DOWNWARD: 0.33333334326744079589 1.00000000000000000000
FE_TOWARDZERO: 0.33333334326744079589 1.00000000000000000000
I think that upward gives a different result, because in the hand calculation I already threw away 1 bit of the mantissa addition that was on position 24. I think that after normalization and the bit appearing on position 25, it is still rounded up.
« Last Edit: April 02, 2018, 12:25:18 pm by hans »
 
The following users thanked this post: newbrain

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #8 on: April 02, 2018, 12:24:45 pm »
EDIT:  I just spotted your comments about rounding modes.

Code: [Select]
int main() {
    float a = (1/3.0);
    float b = a*3;
}

Code: [Select]
gcc -S -O0 test.c

Code: [Select]
    .file   "test.c"
    .text
    .globl  main
    .type   main, @function
main:
.LFB0:
    .cfi_startproc
    pushq   %rbp
    .cfi_def_cfa_offset 16
    .cfi_offset 6, -16
    movq    %rsp, %rbp
    .cfi_def_cfa_register 6
    movss   .LC0(%rip), %xmm0
    movss   %xmm0, -8(%rbp)
    movss   -8(%rbp), %xmm1
    movss   .LC1(%rip), %xmm0
    mulss   %xmm1, %xmm0
    movss   %xmm0, -4(%rbp)
    movl    $0, %eax
    popq    %rbp
    .cfi_def_cfa 7, 8
    ret
    .cfi_endproc
.LFE0:
    .size   main, .-main
    .section    .rodata
    .align 4
.LC0:
    .long   1051372203
    .align 4
.LC1:
    .long   1077936128
    .ident  "GCC: (Gentoo 6.4.0-r1 p1.3) 6.4.0"
    .section    .note.GNU-stack,"",@progbits

My x86 assembler sucks, but is it storing two constants and only doing one multiplication here?  I googled xmm0/xmm1 and they are 128bit registers that support multiple parallel multiplications of single or double precision numbers.

Yes, this was on a 64bit AMD CPU.

I wonder if I ask the AVR the same question will it answer differently...

Nope.  1.0

The assembler output from gcc-avr is much longer, but again it does not refer to the constants 1 or 3 anywhere.  It also seems to use a "call" to a global address called _mulsf3 to do the multiplication.  Has me baffled.

 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #9 on: April 02, 2018, 12:35:34 pm »
Oh sorry, yes I did edit my post quite a bit :)

I think it is a `coincidence` that this calculation returns the same value. As you now saw, with the appropriate rounding mode and the least significant bits in the mantissa (that are actually out of range), you can make float show some errors.
Or in the other calculation, which introduced a lot of leading zeros in a single subtraction, thus making the less significant digits suddenly more significant and introducing large errors.

These are your 0.33333334 and 1.0 right there (convert decimal to hex, then put in here):
Code: [Select]
.LC0:
    .long   1051372203
    .align 4
.LC1:
    .long   1077936128

These are also used in the code:
Quote
    movss   .LC0(%rip), %xmm0
    movss   %xmm0, -8(%rbp)
    movss   -8(%rbp), %xmm1
    movss   .LC1(%rip), %xmm0
    mulss   %xmm1, %xmm0
    movss   %xmm0, -4(%rbp)
    movl    $0, %eax

MOVSS is Scalar Single Precision Floating point.
MULSS is the actual multiplication.

So you can be pretty confident it did do the calculation without prior optimization, and comes to the same result as I showed by hand.
« Last Edit: April 02, 2018, 12:40:01 pm by hans »
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #10 on: April 02, 2018, 12:36:33 pm »
@Paulca: You should read more about the basics of floating point. Of course the AVR gives the same result (even though it is obviously doing soft floating point) because floating point operations are standardised to make sure a program gives the same result no matter the platform it runs on.
 

Offline Naguissa

  • Regular Contributor
  • *
  • Posts: 114
  • Country: es
    • Foro de electricidad, electrónica y DIY / HUM en español
Re: Wasn't expecting this... C floating point arithmetic
« Reply #11 on: April 02, 2018, 12:38:36 pm »
Simply: compiler optimizations

Sent from my Jolla via Tapatalk


Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #12 on: April 02, 2018, 12:45:55 pm »
@Paulca: You should read more about the basics of floating point. Of course the AVR gives the same result (even though it is obviously doing soft floating point) because floating point operations are standardised to make sure a program gives the same result no matter the platform it runs on.

Again you are incorrectly assuming my post is about confusion over loss of precision with floating point arithmetic.  If you read it carefully or run the test programs yourself you will see that is not what my post is about.

My post is the exact opposite as the answer the program gives is CORRECT when compared to what the actual floating point answer is, which would be wrong.

The comments about rounding modes explain it.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #13 on: April 02, 2018, 12:50:47 pm »
Again you are incorrectly assuming my post is about confusion over loss of precision with floating point arithmetic.

Code: [Select]
    printf( "%.20f * 3 = %.20f\n", a, b  );
IMO, your error is in the line I excerpted, which suggests at least a certain amount of confusion over the precision available in floating point arithmetic.

You can't (productively) get 20 digits after the decimal point from a float. Asking the compiler to do exactly that and then doing math predicated on that answer being exact is what is leading others to comment on it.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #14 on: April 02, 2018, 12:50:59 pm »
@Paulca: You should read more about the basics of floating point. Ofcourse the AVR gives the same result (even though it is obviously doing sof-floating point) because floating point operations are standarised to make sure a program gives the same result no matter the platform it runs on.
In most cases the result of each floating point operation strictly follows IEEE 754 these days. However, the results from compiled code typically vary between compilers, as different optimisation strategies cause operations to be conducted in different orders. Calculation order generally has some effect on the final result. However, it can massively affect the answer when the calculation is somewhat mathematically unstable (e.g. at some point in the calculation, the small difference between 2 large numbers is used).
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #15 on: April 02, 2018, 01:25:59 pm »
You can't (productively) get 20 digits after the decimal point from a float. Asking the compiler to do exactly that and then doing math predicated on that answer being exact is what is leading others to comment on it.

Sorry, but that's just piss pedantry.  You know fine well that float((1/3)*3) != 1.  Calculating exactly what it would equal would just be mathurbation and 99% irrelevant to the point of the discussion.

EDIT:  And nobody asked the compiler to do anything it couldn't.  %.20f has nothing to do with the compiler.  It is a token passed to a function.  If anything I was asking printf to do something it wasn't capable of or outside of the sensible limits.  That depends on the implementation of printf, and not the compiler (not directly anyway).

But the point was that running the program DOES give the correct answer.  I'm fairly sure this is in the rounding of the number within printf, within the binary to decimal conversion, not in the calculation or compiler optimisations.  However, do note that the compiler calculated the 1/3 not the CPU.  The 1/3 was stored as a constant.

Using %.20f was simply to expand the precision of the printf statement to where it would definitely not affect the raw values.  It seems even then that it DOES affect them.

Note that a 32 bit float can store up to 38 digits after the decimal point, though only storing very small numbers.  Small numbers high precision, large numbers low precision. 

I did all of this in uni, right down to doing floating point calculations with a pen and paper under exam conditions.  That was well nearly 20 years ago. There is a difference between understanding something and remembering all the details.   The details are not something I use everyday, so I do not "store" them in my head.  The implications of floating point precision loss I DO use on a daily basis, so I at least remember to be careful with them and write routines to maximise precision.  Should I ever actually need to work out the details of a floating point operation I certainly can.  I choose not to for this post as.. it would be mathurbation.
« Last Edit: April 02, 2018, 01:30:41 pm by paulca »
 
The following users thanked this post: hans

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #16 on: April 02, 2018, 01:45:07 pm »
This entire thread is one reason I like common LISP. It has ratio types, there is no magnitude limit on integers and complex numbers are built in. A lot of the time you don't need floats where in C you do, or you have to fuck around and write your own stuff.
 

Offline snarkysparky

  • Frequent Contributor
  • **
  • Posts: 414
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #17 on: April 02, 2018, 01:48:10 pm »
compiler might have optimized out the calculation ??
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #18 on: April 02, 2018, 01:57:43 pm »
Try declaring a and b as volatile float, I am betting the compiler is optimising things out.

Regards, Dan.
 

Offline Naguissa

  • Regular Contributor
  • *
  • Posts: 114
  • Country: es
    • Foro de electricidad, electrónica y DIY / HUM en español
Re: Wasn't expecting this... C floating point arithmetic
« Reply #19 on: April 02, 2018, 01:58:04 pm »
You can't (productively) get 20 digits after the decimal point from a float. Asking the compiler to do exactly that and then doing math predicated on that answer being exact is what is leading others to comment on it.

Sorry, but that's just piss pedantry.  You know fine well that float((1/3)*3) != 1.  Calculating exactly what it would equal would just be mathurbation and 99% irrelevant to the point of the discussion.

EDIT:  And nobody asked the compiler to do anything it couldn't.  %.20f has nothing to do with the compiler.  It is a token passed to a function.  If anything I was asking printf to do something it wasn't capable of or outside of the sensible limits.  That depends on the implementation of printf, and not the compiler (not directly anyway).

But the point was that running the program DOES give the correct answer.  I'm fairly sure this is in the rounding of the number within printf, within the binary to decimal conversion, not in the calculation or compiler optimisations.  However, do note that the compiler calculated the 1/3 not the CPU.  The 1/3 was stored as a constant.

Using %.20f was simply to expand the precision of the printf statement to where it would definitely not affect the raw values.  It seems even then that it DOES affect them.

Note that a 32 bit float can store up to 38 digits after the decimal point, though only storing very small numbers.  Small numbers high precision, large numbers low precision. 

I did all of this in uni, right down to doing floating point calculations with a pen and paper under exam conditions.  That was well nearly 20 years ago. There is a difference between understanding something and remembering all the details.   The details are not something I use everyday, so I do not "store" them in my head.  The implications of floating point precision loss I DO use on a daily basis, so I at least remember to be careful with them and write routines to maximise precision.  Should I ever actually need to work out the details of a floating point operation I certainly can.  I choose not to for this post as.. it would be mathurbation.
The compiler optimises code by default; you have to tell it not to if you don't want that.

Compiler sees:

x/3
x*3

So by default it assumes the result is just x.

Sent from my Jolla via Tapatalk


Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #20 on: April 02, 2018, 02:27:47 pm »
Sorry, but that's just piss pedantry.  You know fine well that float((1/3)*3) != 1.  Calculating exactly what it would equal would just be mathurbation and 99% irrelevant to the point of the discussion.

I know fine well that it does; the question is, why did you expect it to do otherwise, and what were you expecting in that case?

The error of the first operation is a fractional LSB, and the cumulative error after the second operation is less than a quarter LSB high.  You store the result back in a float, necessarily causing rounding.  Why would you expect to get anything other than 1.0f (unless the FPU was specifically directed to do otherwise, as hans's example illustrates)?

So, again, why are you berating the compiler for something you told it to do?  I don't understand your thought process here. ??? ???

Tim
 
The following users thanked this post: newbrain, Jacon

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #21 on: April 02, 2018, 02:33:17 pm »
I know fine well that it does; the question is, why did you expect it to do otherwise, and what were you expecting in that case?

A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply a 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

In decimal.

0.3 * 3 = 0.9
0.33 * 3 = 0.99
0.333 * 3 = 0.999

Forever.  It will never be 1.  It doesn't matter how many 3s you add.  You cannot represent 1/3 in decimal.

In binary you can add 1/4 + 1/16 + 1/64 and keep going for as many digits as you want, but when you multiply it by 3 it will not equal 1.

It won't matter if I use a 32-bit float, a 64-bit double, a 128-bit double, or every one of the bits of RAM in my 16 GB computer: it will never be equal to 1.
« Last Edit: April 02, 2018, 02:38:57 pm by paulca »
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #22 on: April 02, 2018, 02:36:09 pm »
Compiler optimizes code by default, you need to tell it to not do so if you want.

Compiler sees:

x/3
x*3

So it assumes by default x.

This is easily verified by reading the output listing.

As shown in #8, at -O0, it performs the operations exactly as written.  There can be no confusion, what the compiler is trying to do in this case.  :)

Tim
 
The following users thanked this post: hans, newbrain

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #23 on: April 02, 2018, 02:43:10 pm »
A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply a 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

So, what?  You were trying to trap the computer in a lie?  And you got annoyed when you discovered it is incapable of telling a lie, for it is a computer? ;D

Let me put it this way --

Don't say to yourself: "well, it CAN'T DO IT".  Ask yourself: "alright, it's not exact, but how inexact is it?"

Since the computer is incapable of exactly representing 1/3, how far off should its nearest possible value be?

Then, after multiplying that value by 3 (actually 1.5 x 2^1, an exact representation), how far off will it be?

Finally, when crammed back into a float, what is the final result?

Indeed, how could it be anything but the correct value? :)

Just because "two wrongs don't make a right" is a catchy saying, doesn't mean it's always true, in real life or in a computer!  Often, two wrongs cancel out.  Indeed, quite often in binary, this happens. :)

Tim
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #24 on: April 02, 2018, 02:47:19 pm »
Just because "two wrongs don't make a right" is a catchy saying, doesn't mean it's always true, in real life or in a computer!  Often, two wrongs cancel out.  Indeed, quite often in binary, this happens. :)

Fair enough.  I was seeing the first wrong, but slightly missing the second wrong.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #25 on: April 02, 2018, 02:48:58 pm »
As an aside... "bc" the linux utility, uses fixed precision.

If you execute:

scale=30000;  (1/3)*3

You get 0 followed by 30,000 9s.
 

Offline helius

  • Super Contributor
  • ***
  • Posts: 3642
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #26 on: April 02, 2018, 04:05:24 pm »
Of course, we all (should) know that 0.999... is equal to 1.0. I believe that bc uses a decimal representation, which (like binary) has no finite representation for a third.
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: Wasn't expecting this... C floating point arithmetic
« Reply #27 on: April 02, 2018, 04:48:48 pm »
A better test for understanding the "problem" (which I do not see as such) would be this one:

Code: [Select]
#include <stdio.h>
 
int main() {
double q = 1.0;
float z;

  do {
    q = q + 0.0000000000001;
    z = q;
  } while (z == 1.0);
 
  printf( "q = %.20f\n", q );
  printf( "z = %.20f\n", z );

}


I do not show the result for a reason. When you see the result of the code above, you will supposedly understand the result of the original code. Actually, knowing that a single-precision float has 24 significant bits is enough :)
« Last Edit: April 02, 2018, 05:05:40 pm by ogden »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #28 on: April 02, 2018, 05:22:54 pm »
There is a fundamental flaw apparent here: confusion of arithmetic in computers with arithmetic in maths. The two are different.

A good starting point for the theory and practice of computer arithmetic is http://people.ds.cam.ac.uk/nmm1/Arithmetic/index.html
Quote
How Computers Handle Numbers.
This could be called "Computer Arithmetic Uncovered". It covers everything that a scientific programmer needs to know about basic arithmetic, for most of the commonly used scientific languages and several applications. Most of what it says was true and relevant in 1970, and will probably be so in 2070. It describes how computers store and process integers and floating point numbers (real and complex), the exceptions that might arise and what they mean. The intent is to describe how to get reliable answers for a reasonable amount of effort, and to be able to understand strange results and effects

Maclaren has been at the sharp end of many such problems - and how to avoid them.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: agehall

Offline AlfBaz

  • Super Contributor
  • ***
  • Posts: 2184
  • Country: au
Re: Wasn't expecting this... C floating point arithmetic
« Reply #29 on: April 02, 2018, 07:14:34 pm »
Haven't read the thread in detail so I probably shouldn't post, but anyway.

String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4785
  • Country: pm
  • It's important to try new things..
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #31 on: April 02, 2018, 09:07:52 pm »
String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;
I'd call those float literals, not string literals. (I would call "foo" or "3" a string literal, but not a bare 3 in code.)

Neither of those is valid c++, at least according to gcc.
These would be:

    float a = (1/3.0f);
    float b = a*3.0f;

You can't "f" the parenthesized expression. You have to "f" the floating constant.
(I didn't realize this until I tried it, but) you can't "f" an integer constant like 3f either, at least not in gcc.
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #32 on: April 02, 2018, 09:18:09 pm »
I did all of this in uni, right down to doing floating point calculations with a pen and paper under exam conditions.  That was nearly 20 years ago. There is a difference between understanding something and remembering all the details.   The details are not something I use every day, so I do not "store" them in my head.  The implications of floating point precision loss I DO use on a daily basis, so I at least remember to be careful with them and write routines to maximise precision.  Should I ever actually need to work out the details of a floating point operation I certainly can.  I chose not to for this post, as... it would be mathurbation.

I'm still in the middle of all those exams, which is perhaps why I went straight to the bit-level approach, and that very well explains the different approaches to this problem. I can't blame anyone for that. The computations fundamentally are not very hard, just tedious, which is why most won't (and shouldn't) bother.

#21 explains the problem quite well with the decimal example.
In contrast: I'm pretty sure one could also design a floating point unit with radix 3. Then representing 1/3 becomes trivial. But why would anyone do that in a computer? It makes no sense.
Just like floating-slash representations, which could also do the trick, are not used in general-purpose computers. For all we know they could be used extensively in some ASIC that needs to do some very niche calculation at a high rate, but an average engineer will never see those (and even then, designing completely customized arithmetic units is probably quite unusual).

I think it is more important to understand what happens in a floating point unit: that floats are not perfect, which phenomena may happen, and how to battle them.
 

Offline DBecker

  • Frequent Contributor
  • **
  • Posts: 326
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #33 on: April 02, 2018, 09:50:41 pm »
Haven't read the thread in detail so I probably shouldn' t post but any ways.

String literals are promoted to doubles so although you declare a variable as float (single precision) and assign it to a string literal it will get treated as a double during the calculation (ie double lib funcs called) with the result truncated and stored in your float variable. If you want only floating point calculations add f to your string literals, for example

    float a = (1/3.0)f;
    float b = a*3f;


This is the correct answer.

It might seem to be a little close to weasel-language-lawyer details, but C is specified to allow the compiler to do a substantial amount of arithmetic at compile time.  Calculations at compile time must be at least to the resolution/range/precision of the run-time, and are allowed to be arbitrarily more precise/correct.

Even at run time, intermediate results may be kept in a higher precision/resolution format.  The best-known example is Intel co-processors doing IEEE 80 bit floating point, converting from and to the in-memory format (32 or 64 bits) only when loading or storing.

Allowing the compiler and run time to do this has a substantial positive effect on performance, code size, and even correctness.  The latter when there are layers of macros with scaling and offsets that might over- or under-flow if the operations were done naively.
 
« Last Edit: April 02, 2018, 10:02:01 pm by DBecker »
 

Offline Naguissa

  • Regular Contributor
  • *
  • Posts: 114
  • Country: es
    • Foro de electricidad, electrónica y DIY / HUM en español
Re: Wasn't expecting this... C floating point arithmetic
« Reply #34 on: April 02, 2018, 09:56:12 pm »
The compiler optimizes code by default; you need to tell it not to if you want the operations performed literally.

Compiler sees:

x/3
x*3

So by default it assumes the pair simplifies to x.

This is easily verified by reading the output listing.

As shown in #8, at -O0 it performs the operations exactly as written.  There can be no confusion about what the compiler is doing in this case.  :)

Tim
You are right, I didn't see that message until now... (on mobile)

Sent from my Jolla using Tapatalk


Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #35 on: April 02, 2018, 10:07:39 pm »
Quote
0.33333334326744079590 * 3 = 1.00000002980232238770
Yeah, fine.   And then you store 1.00000002980232238770 in a 32bit float, and you get 1.000000 because the least significant bits disappear.  What's the problem?

This has nothing to do with compiler optimization or "cleverness."   Before you make accusations like that, you should write code that the compiler CAN'T optimize, and see if you get the same results.

Code: [Select]
#include <stdio.h>
 
int main() {
    float divisor, multiplier;
    scanf("%f %f", &divisor, &multiplier);
    float a = (1/divisor);
    float b = a*multiplier;
    printf( "%.20f * %f = %.20f\n", a, multiplier, b  );
}
Quote
./a.out
3.0 3.0
0.33333334326744079590 * 3.000000 = 1.00000000000000000000
« Last Edit: April 02, 2018, 10:09:13 pm by westfw »
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #36 on: April 03, 2018, 01:43:21 am »
A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

Where your analysis has gone wrong here is a failure to understand how computers handle floating point numbers.

When a decimal value is converted to binary, the computer is supposed to choose the bit pattern that most closely approximates the decimal value, i.e. the binary representation that has the least error compared to the input value.

Similarly, when a binary value is converted back to decimal, the computer is supposed to produce the decimal string that most closely corresponds to the binary value, again the decimal representation that has the least error compared to the original binary value.

So it may happen, when going through the sequence of operations in your test program, that 1.0000000 represents the binary result of the computation with less error than 0.9999999. In that case, the computer will output 1.0000000.

(The fact that you have asked for 20 decimals in the output has just caused the computer to append another 13 or so zeros to the actual answer. It was pretty much a waste of time doing so, as it doesn't change the computation in any way.)

This was all very clearly illustrated by hans, who did the calculation here:

Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1

In order to do the multiplication, you just multiply both mantissas, add the exponents and finally normalize the number.
Multiplying the mantissas amounts to adding two shifted copies: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
The addition of both exponents -2+1 = -1
We see that the new mantissa is not normalized -> in order to normalize, you shift the mantissa 1 to the right and add 1 to the exponent. Now we have mantissa=1.0 and exponent=0
Which means we get the same exact result back, in this case... I think this is a coincidence.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #37 on: April 03, 2018, 03:12:06 am »
A binary system is incapable of representing a 1/3.

Decimal is incapable of representing a 1/3.

If you multiply 1/3 in either decimal or binary by 3 you will not get 1.

So I did not expect 1.

Where your analysis has gone wrong here is a failure to understand how computers handle floating point numbers.


Really?  So which line in the quote you took is incorrect?

It is correct to say that binary cannot represent 1/3, and that any number which is not exactly 1/3 multiplied by 3 is not 1.  Therefore, logically speaking, binary 1/3 * 3 can never equal 1.

Quote
When a decimal value is converted to binary, the computer is supposed to choose the bit pattern that most closely approximates the decimal value, i.e. the binary representation that has the least error compared to the input value.

Yes... which was the whole       point of my post... and it will NEVER store 1/3 accurately.

Quote
Similarly, when a binary value is converted back to decimal, the computer is supposed to produce the decimal string that most closely corresponds to the binary value, again the decimal representation that has the least error compared to the original binary value.

So it may happen, when going through the sequence of operations in your test program, that 1.0000000 represents the binary result of the computation with less error than 0.9999999. In that case, the computer will output 1.0000000.

*This* was where I messed up.  I did not expect the value to correct itself on the way back.

But in essence the computer is making 2 mistakes that just happen to cancel out.

1/3 cannot be represented in binary, so the number it does come up with is WRONG.

When it then multiplies that number by 3, the answer should NOT be 1, so that is also WRONG.

This isn't exactly computer specific either.  If you try to represent 1/3 in decimal without rounding, you can't do it.  Any number you write down to represent that 1/3 will be wrong.  If you then multiply that wrong number you picked by three it should never produce 1.  If it does, you are wrong.

Unless you round it.

Quote
(The fact that you have asked for 20 decimals in the output has just caused the computer to append another 13 or so zeros to the actual answer. It was pretty much a waste of time doing so, as it doesn't change the computation in any way.)

Oh FFS.  I had a choice.  I could work out how many decimal places the float would store at that magnitude, or I could just stick in 20 to be sure it was BEYOND its limits.  Seriously.  Typing 20 was a lot faster and I don't think I smoked my CPU making it add a few more zeros.  No cores were harmed making it do an extra few dozen instructions.

Also, if it hadn't produced 1, or if the number happened to be 1 only when rounded to the 19th place, I would not have seen it.  Of course I could sit and calculate how many decimal places it can store at that order of magnitude, taking me 10-15 minutes, or I could just type "20".

Quote
This was all very clearly illustrated by hans, who did the calculation here:

Anyway, you could do the arithmetic by hand, of course:
1.0 -> exact, exponent=0, mantissa=1.0
0.33 (recurring) -> best approximated as exponent=-2, mantissa=1.01010101010101010101011
3.0 -> exact, exponent=1, mantissa=1.1

In order to do the multiplication, you just multiply both mantissas, add the exponents and finally normalize the number.
Multiplying the mantissas amounts to adding two shifted copies: 1.01010101010101010101011 + 0.101010101010101010101011 =~ 10.0
The addition of both exponents -2+1 = -1
We see that the new mantissa is not normalized -> in order to normalize, you shift the mantissa 1 to the right and add 1 to the exponent. Now we have mantissa=1.0 and exponent=0
Which means we get the same exact result back, in this case... I think this is a coincidence.

Yes.  In fairness, if I had wanted to do that I would have had to dust off a few text books, or just do a bit of google searching, but as Hans points out it's likely a coincidence.

Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision, or when multiplying and dividing things repeatedly.  In this case it worked out okay and I didn't expect that.  In other examples it will not work out that way.  The two wrongs will not make it right.


« Last Edit: April 03, 2018, 03:15:22 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #38 on: April 03, 2018, 03:38:08 am »
Here are two articles that should make everything a lot clearer:

What Every Programmer Should Know About Floating-Point Arithmetic

http://www.phys.uconn.edu/~rozman/Courses/P2200_15F/downloads/floating-point-guide-2015-10-15.pdf

What Every Computer Scientist Should Know About Floating-Point Arithmetic

http://www.itu.dk/~sestoft/bachelor/IEEE754_article.pdf

Read the first link  ;)
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 
The following users thanked this post: paf, agehall, Jacon

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #39 on: April 03, 2018, 03:46:05 am »
But in essence the computer is making 2 mistakes that just happen to cancel out.

1/3 cannot be represented in binary, so the number it does come up with is WRONG.

When it then multiplies that number by 3, the answer should NOT be 1, so that is also WRONG.

Computers do not make mistakes. They do exactly what their specification says they shall do. (Unless you are claiming a bug or defect in implementation, which will require strong evidence on your part.)

Quote
This isn't exactly computer specific either.  If you try to represent 1/3 in decimal without rounding, you can't do it.  Any number you write down to represent that 1/3 will be wrong.  If you then multiply that wrong number you picked by three it should never produce 1.  If it does, you are wrong.

Unless you round it.

Well, maybe. Maybe not. I can represent 1/3 in decimal as 0.33333... recurring. If I multiply that by 3 I will get 0.99999... recurring. And 0.99999... recurring is mathematically defined as identical to 1.00000... recurring. No rounding needed.

Quote
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.

Says who?

Quote
Especially when you are working with larger numbers and trying to retain precision.  Or when multiplying and dividing things repeatedly.  In this case it worked out okay and I didn't expect that.  In other examples it will not work out that way.  The two wrongs will not make it right.

This is not correct. As someone else has already observed in this thread, floating point numbers retain their precision when you multiply and divide them. The whole point of floating point representation is that large numbers retain the same precision as small numbers.

It is addition and subtraction that causes problems.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #40 on: April 03, 2018, 03:54:18 am »
Testing:
Code: [Select]
#include <stdio.h>
#include <math.h>
 
int main() {

    double divisor, multiplier;
    scanf("%lf %lf", &divisor, &multiplier);
    double a = pow(divisor, -1.0);
    double b = a * multiplier;
    printf( "%.20f * %.2f = %.20f\n", a, multiplier, b  );
}

Outputs:
Code: [Select]
~/Desktop$ ./a.out
9191919191919191 9191919191919191
0.00000000000000010879 * 9191919191919192.00 = 1.00000000000000000000
Obviously I overran the precision of the doubles (note the 92 instead of the entered 91), but the result is still suspiciously correct.
What is more, if I copy "0.00000000000000010879 * 9191919191919192.00" and manually enter it into Calculator (same hardware, same bit width) I get 0.999988889. The difference is enormous; the fifth digit after the point is wrong. I'd think the FPU holds intermediate results of the calculation at higher precision and does some kind of smart correction during consecutive steps. When Calculator does the math without this "additional track data", it fails to output 1.0000.
 

Online ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #41 on: April 03, 2018, 04:18:22 am »
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision, or when multiplying and dividing things repeatedly.

I don't think you actually understand at all the limitations of floating point, and how they are different from fixed point.  I think you have just heard that, and are repeating it without understanding.

In particular, multiplication and division, even "repeatedly" are not major problems.  Multiplication and division individually generate less than 1 ULP error per operation. Even after many operations that tends to not accumulate too much.

The big problems in floating point more generally come down to either subtraction of numbers of very close magnitude, or adding numbers of very different magnitude.  In order to do addition or subtraction, all three operands (in and out) have to be adjusted to the same exponent.  If one of them is much smaller magnitude than the others it will have many fewer significant digits than the format suggests.  This loss of precision can be harmless or totally mess up your algorithm, depending on what you are doing.
 
The following users thanked this post: Jacon

Offline hans

  • Super Contributor
  • ***
  • Posts: 1640
  • Country: nl
Re: Wasn't expecting this... C floating point arithmetic
« Reply #42 on: April 03, 2018, 07:53:17 am »
Testing:
Code: [Select]
#include <stdio.h>
#include <math.h>
 
int main() {

    double divisor, multiplier;
    scanf("%lf %lf", &divisor, &multiplier);
    double a = pow(divisor, -1.0);
    double b = a * multiplier;
    printf( "%.20f * %.2f = %.20f\n", a, multiplier, b  );
}

Outputs:
Code: [Select]
~/Desktop$ ./a.out
9191919191919191 9191919191919191
0.00000000000000010879 * 9191919191919192.00 = 1.00000000000000000000
Obviously I overran the precision of the doubles (note the 92 instead of the entered 91), but the result is still suspiciously correct.
What is more, if I copy "0.00000000000000010879 * 9191919191919192.00" and manually enter it into Calculator (same hardware, same bit width) I get 0.999988889. The difference is enormous; the fifth digit after the point is wrong. I'd think the FPU holds intermediate results of the calculation at higher precision and does some kind of smart correction during consecutive steps. When Calculator does the math without this "additional track data", it fails to output 1.0000.

That is because this example doesn't show the strength of floating point at all. Printed with 20 digits, the left-hand operand only shows 5 significant digits, so all calculation errors start to appear at the fifth digit. 32-bit floats have approximately 6 to 7 significant decimal digits, 64-bit floats about 15 to 16, so you nulled a fair bit of the operand when entering it manually into the calculator.

If we consider 32-bit floats:
1.08791 * 10^-16 * 9191919191919192 = 0.999998081
1.087912 * 10^-16 * 9191919191919192 = 0.999999919 (7 leading 9s)
1.0879121 * 10^-16 * 9191919191919192 = 1.000000011 (7 zeros)
etcetera
The last result would get rounded back to 1.0. The 1 before last would be 0.999999940395355224609375, exactly 1 LSB in mantissa from 1.0.
 
The following users thanked this post: MasterT

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #43 on: April 03, 2018, 10:38:25 am »
Do not be fooled by this example giving the correct answer back; the reason I was not expecting it is that I know full well how terrible floats and doubles are at storing numbers.  Especially when you are working with larger numbers and trying to retain precision, or when multiplying and dividing things repeatedly.

I don't think you actually understand at all the limitations of floating point, and how they are different from fixed point.  I think you have just heard that, and are repeating it without understanding.

In particular, multiplication and division, even "repeatedly" are not major problems.  Multiplication and division individually generate less than 1 ULP error per operation. Even after many operations that tends to not accumulate too much.

The big problems in floating point more generally come down to either subtraction of numbers of very close magnitude, or adding numbers of very different magnitude.  In order to do addition or subtraction, all three operands (in and out) have to be adjusted to the same exponent.  If one of them is much smaller magnitude than the others it will have many fewer significant digits than the format suggests.  This loss of precision can be harmless or totally mess up your algorithm, depending on what you are doing.

I have real-life experience of floating point errors.  I was just finding it difficult to produce the errors I was expecting with simple examples.

A real-world example was writing some very simple banking operations, using a Java BigDecimal returned from the SOAP API of the bank mainframe.  We simply deducted a transfer amount from it to display.  The expected result in one particular test was 20 Turkish lira.  However, the result we got was 20.000001... plus a load of garbage.  We didn't have time to explain it; we checked we weren't doing anything stupid, took the number as a string, truncated it and displayed it.  We had hours, not days, to fix it.

In the real world, with real numbers and slightly poorly thought out code, these errors appear quite often.  The REALLY bad thing about them is that it is fairly difficult to even know they are happening, unless your unit tests are incredibly exhaustive.

But... here's one with addition.  It's not even high precision. 
Code: [Select]
#include <stdio.h>

int main() {
    float b = 0.1;
    float c = 0;
    for( int i=0; i<100; i++ ) {
        c += b;
    }
    printf( "%.20f\n", c );
}

Of course it looks "contrived" to the academic, who would say "Well, just multiply it by 100", but if b is being read in from a user, a bank account or a sensor and just happens to be a number the computer doesn't like, it goes off.  Adding up all the VAT or transaction fees, for instance.

Also, I didn't necessarily mean multiplying and dividing a number by the same amount each time; I meant real-world elongated calculations involving loops and real-world numbers.  It is easy to forget, as you often don't print your intermediates, that you are spanning large ranges of magnitude and precision.

So I still stand by it: computers suck at maths, make simple errors, can't represent real-world numbers very well, and without trying to avoid such mistakes, or identify them when they do happen, your code can end up fairly badly out in its calculations.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline daveshah

  • Supporter
  • ****
  • Posts: 356
  • Country: at
    • Projects
Re: Wasn't expecting this... C floating point arithmetic
« Reply #44 on: April 03, 2018, 10:47:53 am »
I understand financial applications, at least historically, used decimal fixed point to avoid these kind of disconcerting errors and match conventional accounting.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #45 on: April 03, 2018, 10:51:56 am »
Computers do not make mistakes. They do exactly what their specification says they shall do. (Unless you are claiming a bug or defect in implementation, which will require strong evidence on your part.)

Go and buy a top-end gaming rig.  Read the spec very carefully.  Fulfil all of its requirements and then overclock it to its maximum allowed figures, from the spec.

Now, run Prime95 on it for a day.

You may see Prime95 fail as the CPU produces incorrect results.

Note these errors are not crashes; we are not talking about lock-ups.  We are talking about arithmetic errors.  Also, by its design, Prime95 runs entirely in the core to remove the latency of memory access.

Most computers are sold considerably throttled back from their "lab maximum" potential for this reason.

Besides, it depends on your context as to what a "mistake" is.

If you calculate and add up all the VAT for a million orders and are out by 15 cents at the end, do you think the accountant will say "Oh, okay then, it's not a mistake" when you explain the floating point spec to him?  Of course not.

If you run a PC 24/7 and a random burst of RF or a mains spike causes a bit error in the CPU which results in software crashing, then in the academic world of hardware, electronics and text books you could argue that was not a mistake.  That is like saying that because I am hungover, adding 1 and 2 to get 4 is not a mistake either.
« Last Edit: April 03, 2018, 10:54:25 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #46 on: April 03, 2018, 11:01:13 am »
Here is another slightly contrived example, but it is one that a programmer could make so very, very easily.

Code: [Select]
#include <stdio.h>

int main() {
    float a = 1;
    double b = 1;
    a = a / 3.0f;
    b = b / 3.0f;
    if( a == b ) {
        printf( "Yes" );
        return 0;
    }
    printf("No");
}

and another...

Code: [Select]
#include <stdio.h>

int main() {
    float a = 1;
    float b = 1;
    float c = 10;
    a = a / c;
    b = b / c;
    b = b / c;
    b = b * c;
    if( a == b ) {
        printf( "Yes" );
        return 0;
    }
    printf("No");
}

« Last Edit: April 03, 2018, 11:04:21 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline glarsson

  • Frequent Contributor
  • **
  • Posts: 814
  • Country: se
Re: Wasn't expecting this... C floating point arithmetic
« Reply #47 on: April 03, 2018, 11:19:36 am »
A properly educated programmer almost never compares floating point values using "==", only in very special and rare cases.

Also, don't use floating point to represent money. It leads to all sorts of nasty problems. A share price of 100.0000001 is not the same as 99.99999999999 even if displayed rounded to two decimal places. Expensive mistake...
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #48 on: April 03, 2018, 11:43:32 am »
A properly educated programmer almost never compares floating point values using "==", only in very special and rare cases.

Correct, it was to highlight that they are not the same.  So if you take those two numbers and carry on with further calculations, they may continue to diverge.

Academically, if the two numbers should be the same but aren't, then using them in further calculations may produce more incorrect results.

Compound errors.

It takes very careful thought to arrange sequences of calculations so as to prevent compound errors resulting from these diverging approximations caused by floating point.

It takes even more careful and tedious testing to identify that your sums are wrong in the event it happens.

Quote
Also, don't use floating point to represent money. It leads to all sorts of nasty problems. A share price of 100.0000001 is not the same as 99.99999999999 even if displayed rounded to two decimal places. Expensive mistake...

It's funny you mention that, because I have worked at a stock exchange.  They DO use doubles to calculate financial values.  However, they are not compared with == but with < and >, and usually you would not be working with decimals that fine.  An example might be calculating the mid point (and other market data indicators) of a board.  Something that happens potentially hundreds of thousands of times a second for each instrument.

The only routine I can remember that wasn't just comparing numbers against a reference was calculating aggregate long/short position margins on a credit-limit filter, for which we were provided the equation in a PDF; the first thing it did was round all the financial values down to whole-dollar amounts.  The filter was not concerned with your 29c orders, but with your broker sitting on a long position of 10 million dollars, not offset against the short position, being able to settle.

There were also routines using fixed-point mathematics.  Advanced math libraries, or even Math.*, were not permitted in my domain as my code was on the critical order-flow path.  No Boost either.  Raw, low-level C, sub-microsecond latency wire to wire.  Thankfully most of the code just translated between different exchange and broker message formats and totalled up risk.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #49 on: April 03, 2018, 11:48:07 am »
We're "slow finance" and we use our own math library because we have all the time in the world :)

It's actually a typed constraint solver engine which uses rational, precise decimal types only and any loss of precision has to be manually accounted for i.e. conversion to and from double precision values. It generates C# code which is fast and has no assumptions in it or precision loss or human errors (assuming the input was correct)

edit: based on SICP 3.3.5: https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-22.html#%_sec_3.3.5
« Last Edit: April 03, 2018, 11:51:20 am by bd139 »
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #50 on: April 03, 2018, 11:55:36 am »
We're "slow finance" and we use our own math library because we have all the time in the world :)

It's actually a typed constraint solver engine which uses rational, precise decimal types only and any loss of precision has to be manually accounted for i.e. conversion to and from double precision values. It generates C# code which is fast and has no assumptions in it or precision loss or human errors (assuming the input was correct)

edit: based on SICP 3.3.5: https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-22.html#%_sec_3.3.5

Getting slightly off the track here, but I ran into an "Engineer versus Accountant" issue recently.

Calculating VAT.  I was trying to explain that if I calculate VAT per item on an invoice to high precision, the total VAT charged may "look" different to the VAT amount on the total bill.

Of course most accountants don't even realise that their spreadsheets and/or Sage have strategies deliberately coded to do whatever the taxation system requires.

It took quite a lot of effort to finally get the answer that VAT (in the UK) is on a literal basis.  The tax office are not concerned that you lost 0.1p on an invoice due to rounding; they are concerned that if you charge the customer £1.39 VAT, you put that £1.39 VAT on your tax return.

But explaining the engineering problem to accountants was harder than expected.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #51 on: April 03, 2018, 12:42:56 pm »
Yes accounting rules are different and accountants are dicks.

That's one reason we use the constraint engine, as it allows us to provide the precision rules to it, unlike the native type system. One of the common problems I saw when I was working in ecommerce for a number of years was tax invoice values. When you sum 1000 order lines ex-VAT and then apply VAT with 2-decimal-place pricing, you get an issue where the per-line-item VAT doesn't always add up to the VAT on the net sum. When that happens on 10,000 orders, HMRC will own you just because they are arseholes. Also 9999 clients won't even blink an eye and just enter the invoice, but one will kick up a stink and shitpost all over twitter about how your product can't add. This is made even worse when incremental floating point rot creeps in. Incidentally, no one gives a crap about this really.

That is until someone wants to be paid commission and they want it right to that tenth of a penny because it might round up into their pocket. They'll spend a week chasing half a penny.

Incidentally I charged them a lot to fix it, which was an expensive few pennies lost :)
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #52 on: April 03, 2018, 01:37:09 pm »
When you sum 1000 order lines ex-VAT and then apply VAT with 2-decimal-place pricing, you get an issue where the per-line-item VAT doesn't always add up to the VAT on the net sum.

Isn't there a special kind of rounding that can be used for each of the 1000 line item VAT entries so that the sum does add up to the total?
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #53 on: April 03, 2018, 01:45:02 pm »
When you sum 1000 order lines ex-VAT and then apply VAT with 2-decimal-place pricing, you get an issue where the per-line-item VAT doesn't always add up to the VAT on the net sum.

Isn't there a special kind of rounding that can be used for each of the 1000 line item VAT entries so that the sum does add up to the total?

This is what I was told, in the end. 

For each line item, calculate and round the VAT to the nearest penny, using natural rounding, aka >=0.5 round up.

Then for the order VAT total, total those values up.

There does, however, exist the possibility that the total on the order does not equal exactly 20% of the net pre-VAT total.

HM Revenue and Customs could pull you up on this, but if you can demonstrate why it's calculated that way and you are accounting VAT only on exactly what you charged the customer, they should be fine.

If however you are charging the customer by line item * VAT Rate and then putting the order total * VAT Rate onto your tax documents they may then charge you the differential and bitch at you a lot... because they are dicks.

Of course there are different ways, the important part is that what you show the customer on the bill is what you actually charged them and it is what you actually record for tax purposes.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #54 on: April 03, 2018, 01:48:20 pm »
Of course it looks "contrived" to the academic who would say, "Well, just multiply it by 100", but if a is being read in from a user, a bank account or a sensor and just happens to be returning numbers the computer doesn't like, it goes off.  Adding up all the VAT or transaction fees, for instance.

...

So I still stand by it: computers suck at maths, make simple errors, and can't represent real-world numbers very well, and without trying to avoid such mistakes or identify them when they do happen, your code can end up fairly badly out in its calculations.

Guess I'll be repeating this once again:
There are no bad models, only incomplete models.

Or more specifically: there are no bad computers, only incomplete programs.

Again: you literally told the computer that you want to introduce rounding errors, by choosing floats.

If you had wanted exact values, you would've chosen integers, rationals, bignums, whatever.

Further, complaining that, by coincidence or providence, you still sometimes get the correct answer, despite having no guarantee of correct answers, is silly.

"Pray tell, Mr. Babbage, if one should enter the wrong number, will it still produce the correct answer?"

Indeed, reading such code, it seems clear that it was your intent to introduce these errors.  You cannot possibly complain about errors that you, yourself, have personally desired!  That's what it looks like from the outside.

Accounting, as an example, still cannot be free of rounding, because interest accrues decimal places very quickly.  Far faster than, say, the number of decimal positions even a large government's balance sheet might have (maybe 15 digits?).  Nevermind continuous compounding, which uses a transcendental function: essentially zero inputs will have exact (rounding-free) outputs.

Anyway, this is a manifestly solved problem: many countries are phasing out their smallest denominations (Canada for example having done so), and transactions are simply rounded (up or down) to the nearest now-smallest denomination (nickels).  Rounding, it's the law!

Tim
« Last Edit: April 03, 2018, 01:49:57 pm by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: hans, newbrain

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #55 on: April 03, 2018, 01:55:41 pm »
Guess I'll be repeating this once again:
There are no bad models, only incomplete models.

Or more specifically: there are no bad computers, only incomplete programs.

Again: you literally told the computer that you want to introduce rounding errors, by choosing floats.

We are arguing about semantics of the term "error" and "mistake".

If I build an abacus that has 11 beads, but if you use the 11th bead one of your beads falls off.  I tell you this, but you insist on using the 11th bead.  Is the abacus in error or the user?

I think that depends on the perspective.

This is where I  often come into conflict with other engineers.  I am a practicalist.  I want to get the job done to the highest quality per effort available.  I do not like pissing around with diminishing returns.  I do not like boiling away resources on pedantry over a 1% gain.  I do not like smart arses who insist on writing complicated code because they need tissues or gym socks to mop up after them.

So in my view the abacus is broken and one needs to be careful using it because of this.

In the pedant's view, however, such as a hardware engineer's, it is fine; it's an artefact of how they designed it.  The end user couldn't give a shit how they designed it or why; they would prefer it worked as intuitively intended to get their job done.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #56 on: April 03, 2018, 02:30:17 pm »
The end user couldn't give a shit how they designed it or why, they would prefer it worked as intuitively intended to get their job done.

This is exactly what the job of the engineer is - to isolate the end user from all the peculiarities and idiosyncrasies.

For some reason, engineers often don't do it. For example, when you enter a credit card number, many of the forms will not let you enter spaces or dashes - and this after 20+ years in the making.

However, the engineers themselves must be able to deal with technical stuff, and if something is not right, the engineer will be better off figuring out the root cause instead of laying the blame around.

Infinite precision cannot be achieved by finite numbers. Floating point format is one of the methods to deal with that. If you don't understand how it works, you can read the standard, you can look at the binary representation and figure out all the peculiarities. If, after you understand the mechanism, you don't like it, come up with your own method, if you can.

 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #57 on: April 03, 2018, 03:14:28 pm »
Sorry to disappoint, but... your preference is simply impossible?  It seems rather... impractical, self-contradictory you might say? ???

If you don't like what a CPU cranks out, then fine, go... uh, sheesh... go curl up in the fetal position and cry?

Idunno man, you don't have a choice.  If it's not an option to:
1. Heroically correct those, "design quirks" shall we say (because that would take too much effort),
2. Choose a CPU with fewer quirks (but which has negligible market share, so would take too much cost or effort),
3. Create your own CPU without any quirks (because that would take way too much cost and effort),
Then what can you do?

Please let me know if I'm terribly misunderstanding things here!

I've always taken the view that, writing a program is solving a puzzle; building a bridge from the bricks and beams you're given; salvaging electronics and repurposing the components for a new function; etc.  Whatever the case, the challenge is to make do with the building blocks you have, to realize the desired solution.

I've always used this view, and it's always been reinforced by experience.  It's hard to think of any situation where it is not true.  Even the people making the CPUs, have to put up with the constraints imposed by their silicon process.  (Though, probably more importantly, they must put up with many less-than-ideal constraints, like compromising an otherwise magnificent x64 processor with stupid legacy 8086 instructions and semantics!)

Practicalism (by your paragraph of definition) fits nicely within this, as you simply do what you can, with what you have available, including time and money as well as material and technical resources.  There's no complaining about inconsistent hardware -- it's just what you have to work with.  There's no need to be frustrated by anything, there is only incomplete knowledge (incomplete documentation might be represented as a chance of failing to meet the constraints, rather than a material quantity), and the tedium of implementing something, whether it be the direct route, or via workarounds.

An argument from law, also goes here: ignorance of the law is not a defense.  If the law of the CPU is that so-and-so instruction produces this-and-that result plus quirks, then it's your own fault that you didn't know about it.  Mind, that remains true, even if nothing ever says what those quirks are.  (Intel has no idea how many binary input sequences produce wrong results -- CPUs are far too complex to test exhaustively.  Most FPU operations are in error by some amount, based on a statistical evaluation.  The chance of an unlucky input producing an output that violates the specification it was supposed to implement (like IEEE-754), is small, but is not zero.*)

*I don't know, maybe FPUs are actually provably-correct these days.  In that case, use something unprovable, like race conditions, or cache coherency or something.

There are many legal standards, that are not spelled out in any single law.  Many are an ad-hoc patchwork of case history, and a few laws, with an outcome deeper than the sum of its parts.  A lawyer must know these structures, just as an EE must know, say, that ceramic capacitors don't meet ratings under bias; or... that an SE must know that PHP is a clusterfuck. ;D

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #58 on: April 03, 2018, 03:32:07 pm »
My OS periodically notifies me that "CPU microcodes were updated". It seems we don't have hard-wired CPU/FPU/ALU any more, but software-defined ones. This makes me think that the security concern may not just be floating point math related, but wider: you cannot trust that whoever writes those microcodes hasn't bugged the integer math. And reading the assembler listing would not help much, since all the low level instructions like mov, mult, sum etc. may produce something that you would not expect.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Wasn't expecting this... C floating point arithmetic
« Reply #59 on: April 03, 2018, 03:33:50 pm »
@T3sl4co1l: Fully agree.

Floating point numbers and operations on them are approximations. They are NOT mathematically correct in a strict sense. So this concept of "mathematical correctness" can be set aside.

FP has its uses when dealt with carefully. But if you want exactness (without having to spend a lot of time determining how to use FP to get exact results in your particular case), use integers. As we already said, there are numerous ways of using integers.

As an example, I would never use FP numbers for financial purposes. A 64-bit integer can exactly represent over 184,467,440 billion units of currency down to the cent (2^64 cents), which is an awful lot more than the GWP. It probably fits most uses.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #60 on: April 03, 2018, 03:39:28 pm »
Yep. For ref, our internal representation of financial values is stored as follows as a study case:

struct FD {
    ulong Value;
    byte Precision;
    Currency Currency;
}

This represents the absolute value in the DP position so £1850.99 would be:

Value = 185099
Precision = 2
Currency = Currency.GBP

To add those together, you have three steps:

1. Assert the currency is the same.
2. Shift the lowest precision value up to the highest precision by multiplying it by 10^(precision_max - precision_min), checking for overflows of course.
3. Add the numbers together

So much complexity for simple operations when you need precision.
 

Offline Nerull

  • Frequent Contributor
  • **
  • Posts: 694
Re: Wasn't expecting this... C floating point arithmetic
« Reply #61 on: April 03, 2018, 05:13:48 pm »
My OS periodically notifies me that "CPU microcodes were updated". It seems we don't have hard-wired CPU/FPU/ALU any more, but software-defined ones. This makes me think that the security concern may not just be floating point math related, but wider: you cannot trust that whoever writes those microcodes hasn't bugged the integer math. And reading the assembler listing would not help much, since all the low level instructions like mov, mult, sum etc. may produce something that you would not expect.

How is that any different than hardware? The Pentium had a busted FPU (the FDIV bug) back in 1994; buggy CPUs aren't a new problem.
 

Offline sokoloff

  • Super Contributor
  • ***
  • Posts: 1799
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #62 on: April 03, 2018, 05:52:57 pm »
This is where I  often come into conflict with other engineers.  I am a practicalist.  I want to get the job done to the highest quality per effort available.
I'm not sure that C++ is the best language to accomplish the last sentence's stated goals. (And I say this as a very long time C++ programmer who has a lot of love for the "portable assembly language" that C and C++ represent.)
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #63 on: April 03, 2018, 05:55:37 pm »
Yes that’s for Common Lisp  :-DD

Going to hide back in my hole now :)
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #64 on: April 03, 2018, 06:15:07 pm »
How about this :

time for a new number format : Algebraic
The numbers are stored as arrays of bytes
any operation performed is not calculated but stored in the number array
when cast to a lower type, the 'number' (the equation) is first algebraically solved, and then the 'calculated' result is returned in the lower type.
you can ask for the full equation by asking for the 'true' number
you can ask for the algebraically reduced number using 'frac'

def x algebraic
def y algebraic
x = 1                'x now contains [1]
x = x /3            ' x now contains [1/3]
y = x *3            ' y now contains [(1/3)*3]
x = sqr(x)         ' x now contains [S(1/3)]
x = x^2            ' x now contains [(S(1/3))^2]
x = x *3            ' x now contains [((S(1/3))^2)*3]
y = y * x           ' y now contains [((1/3)*3)*((S(1/3))^2)*3]
print x              3  ' resolves to smallest format that can store in this case an integer
print x.tofloat    3.0
print y.tofloat    9.0
print x.true       ((S(1/3))^2)*3  ' this returns the actual stored sequence

when you ask for the 'value', the array is parsed algebraically to its smallest format and then it is calculated. If the result is an 'endless number' it can be returned as a fraction.
For example:
x = 1/3
x = x *2
print x.float ' prints 0.666666666666
print x.true  ' prints (1/3)*2
print x.frac  ' prints 2/3       this reduces the equation without calculating any 'precision unsafe' operations like SQR and div, and shows the outcome
x = x.frac         ' reduces the algebraic equation to its simplest form and stores that.
print x.true  2/3

The format would support ln, log, sin, cos, tan, j (imaginary) and other commonly found things like e and pi.

What are the coders waiting for ?
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #65 on: April 03, 2018, 06:22:49 pm »
Sorry to disappoint, but... your preference is simply impossible?  It seems rather... impractical, self-contradictory you might say? ???

If you don't like what a CPU cranks out, then fine, go... uh, sheesh... go curl up in the fetal position and cry?

No, not really.  By accepting the shortcomings of floating point and learning to treat them cautiously, I know where to pay special attention... because computers suck at maths.

This is all I'm saying.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #66 on: April 03, 2018, 06:28:26 pm »
...because computers suck at maths.

This is all I'm saying.

You have no evidence that computers suck at maths any more than humans with pen and paper suck at maths.

So really you are saying nothing at all. This whole thread is a waste of energy.
 
The following users thanked this post: Jacon

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #67 on: April 03, 2018, 06:52:17 pm »
Hey Ian, could you indulge me in a hypothetical for a moment? :D

Pray tell, what is the square root of two?  Please write out twenty decimal places.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #68 on: April 03, 2018, 06:58:15 pm »
Pray tell, what is the square root of two?  Please write out twenty decimal places.

It's an interesting example, and a university assignment I had to do in assembler.  Thankfully only to the closest integer: the square root of any number.

The traditional way computers do it is much like a schoolboy would.  Try, compare, adjust, try, compare.  Though with much smarter searching logic.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #69 on: April 03, 2018, 06:59:50 pm »
Hey Ian, could you indulge me in a hypothetical for a moment? :D

Pray tell, what is the square root of two?  Please write out twenty decimal places.

Tim

Isn't that my point? Clearly it's much easier for a computer to do that than a human. So to say "computers suck at maths" when they are evidently much better at it than humans is a rather pointless statement.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #70 on: April 03, 2018, 07:06:33 pm »
Hey Ian, could you indulge me in a hypothetical for a moment? :D

Pray tell, what is the square root of two?  Please write out twenty decimal places.

Tim

Isn't that my point? Clearly it's much easier for a computer to do that than a human. So to say "computers suck at maths" when they are evidently much better at it than humans is a rather pointless statement.

Okay.  Lay off now.  It's getting old.  Let's say instead that computers make approximations in maths which may cause your program to error out, and that you need to be aware of that.

Out of interest... I gave up at 6 decimal places, but:
Code: [Select]
1.5 * 1.5
2.2
1.4 * 1.4
1.9
1.45 * 1.45
2.10
1.42 * 1.42
2.01
1.415 * 1.415
2.002
1.412 * 1.412
1.993
1.413 * 1.413
1.996
1.414 * 1.414
1.999
1.4145 * 1.4145
2.0008
1.4142 * 1.4142
1.9999
1.4143 * 1.4143
2.0002
1.41425 * 1.41425
2.00010
1.41422 * 1.41422
2.00001
1.41421 * 1.41421
1.99998
1.414215 * 1.414215
2.000004
1.414212 * 1.414212
1.999995
1.414214 * 1.414214
2.000001
1.414213 * 1.414213
1.999998                                                                                                         
1.4142135 * 1.4142135                                                                                           
1.9999998                                                                                                       
1.4142138 * 1.4142138
2.0000006                                                                                                       
1.4142137 * 1.4142137
2.0000003                                                                                                       
1.4142136 * 1.4142136
2.0000001
1.4142135 * 1.4142135
1.9999998
1.41421355 * 1.41421355
1.99999996
1.41421356 * 1.41421356
1.99999999
1.41421357 * 1.41421357
2.00000002
1.414213565 * 1.414213565
2.000000007
1.414213562 * 1.414213562
1.999999998
1.414213563 * 1.414213563
2.000000001
1.4142135625 * 1.4142135625
2.0000000003

That is, as I understand it, how computers calculate roots.  Obviously there are optimisations and dedicated hardware available for this process, but it's all just trial and error.

To take things a little off the rails.  Here is a question for the egg head grey beards...

How does a CPU divide by 3?
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline 691175002

  • Regular Contributor
  • *
  • Posts: 64
Re: Wasn't expecting this... C floating point arithmetic
« Reply #71 on: April 03, 2018, 07:31:26 pm »
Like everyone else, I also took numerical computation in university and did all the operations by hand.  I also retained nothing other than a broad sense of caution when dealing with floating point numbers.

I suspect 1/3 is simply a combination that works by chance.  If you test your sample code with other values (such as 1/11) it fails as expected.

Code: [Select]
Python 3.5.2 |Anaconda 2.5.0 (64-bit)| (default, Jul  5 2016, 11:41:13) [MSC v.1
900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy as np
>>> a = np.float16(1) / np.float16(3)
>>> a*np.float16(3)
1.0

>>> a = np.float16(1) / np.float16(11)
>>> a * np.float16(11)
0.99951
>>>

Quote
How does a CPU divide by 3?
IIRC it multiplies by 1/3 so that it can reuse the multiplier hardware.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #72 on: April 03, 2018, 08:01:40 pm »
Like everyone else, I also took numerical computation in university and did all the operations by hand.  I also retained nothing other than a broad sense of caution when dealing with floating point numbers.

Exactly.  I hold several University degrees in computing and technology.  This caution is all you need from day to day.

But we are in the presence of academics and pedants who believe that every single instruction you execute on a computer needs to be understood down to the bit level of the floating point unit, and that failure to understand this detail is a cardinal sin which means you are not a real programmer.

This might be fine for a few hundred instructions in an MCU, but for a few billion in a real distributed system with 100s of programmers, it's no longer practical.  So you need to trust and set boundaries within which to stay so you do not encounter "the dragons."

While they fettle about with that level of detail I will get actual work done, produce actual products, make actual money and get paid a shed ton of money doing so.

That said, if I had to pick a perfect software team, I would have 1 IanB, the grumpy technical pedantic engineer, 1 me, the practicalist technical engineer, 1 business-sympathetic technical engineer, 3 cookie cutter senior engineers to do the bulk of the work and 3 juniors to do the dirty work and mould into seniors.  So while the pedant approach is something I hate, it is useful to have; it can just sometimes be difficult to weigh their whinging against actual risk versus effort.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #73 on: April 03, 2018, 08:12:01 pm »
Isn't that my point? Clearly it's much easier for a computer to do that than a human. So to say "computers suck at maths" when they are evidently much better at it than humans is a rather pointless statement.

"Easier", "Better", "Pointless" are subjectives.

The computer is FASTER than a human, by many orders of magnitude, but it is not, by far, better.  A computer will execute billions of mistakes faster than a human can make one.

So a lot of the maths they do is based on this fact.  They will happily make a million trial and error calculations to get an exact value a human could narrow in on in a few iterations.

EDIT:  A lot of software engineering, is about realising this.  Sorting routines, matching and filtering routines can seem horribly difficult to devise for a junior engineer, the senior realises some of the things "you" find hard are just repetitive and tedious, two things a computer has no trouble doing.  So the solution is often to just use the sheer speed of the computations to solve things brute force.

It depends on those subjective qualifiers if you consider this better.

You will note that computers rarely do maths, they simply do computation (it's kind of in the name and where it came from).  But that's a whole other kettle of fish.
« Last Edit: April 03, 2018, 08:18:18 pm by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #74 on: April 03, 2018, 08:32:14 pm »
Pray tell, what is the square root of two?  Please write out twenty decimal places.

Out of interest... I gave up at 6 decimal places, but:
Code: [Select]
...
1.4142135625 * 1.4142135625
2.0000000003

Is, as I understand it, how computers calculate roots.

You idiot!  It's not correct!  The square root of two is not, and can never be, 1.4142135623730950488!  What a stupid computer!

(Do you see how strange this looks?)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: Wasn't expecting this... C floating point arithmetic
« Reply #75 on: April 03, 2018, 08:38:09 pm »
The computer is FASTER than a human, buy many orders of magnitude, but it is not, by far, better.  A computer will execute billions of mistakes faster than a human can make one.

So a lot of the maths they do is based on this fact.  The will happily make a million trial and error calculations to get an exact value a human could narrow in on in a few iterations.

 :palm:

A human can program the computer to do exactly what he (the human) would do in a few iterations; the computer will then perform the identical calculation and give an identical result to the human's.

You really should learn more about computers and programming.
 

Online MasterT

  • Frequent Contributor
  • **
  • Posts: 785
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #76 on: April 03, 2018, 08:38:34 pm »
My OS periodically notifies me that "CPU microcodes were updated". It seems we don't have hard-wired CPU/FPU/ALU any more, but software-defined ones. This makes me think that the security concern may not just be floating-point related, but wider: you cannot trust whoever writes those microcodes that integer maths is not bugged either. And reading the assembler listing would not help much, since all the low-level instructions like mov, mult, sum etc. may produce something you would not expect.

How is that any different than hardware? The Pentium had a busted FPU in 1993, buggy CPUs aren't a new problem.
This is a completely different matter. Hardware bugs you could discover by running tests before you set up a multi-billion account. And even if a bug was not discovered, then after a crime happened, a special task team could find it and use it as evidence.
A software microcode hack is much harder to prove. It could be remotely activated at a specific time. You could have thousands of programmers verifying by hand and pencil every bit and line of the assembly code, but all that hard work would be useless.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #77 on: April 03, 2018, 09:01:41 pm »
I think the point is the computer uses its best skill: iterative, tedious computation.  A human with more mathematical wit than me will realise there are quicker ways, which just require the kind of dynamic thought processes that computers don't have.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #78 on: April 03, 2018, 09:24:49 pm »
That's what JIT is for.
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #79 on: April 03, 2018, 09:51:51 pm »
Quote
time for a new number format : Algebraic

It's not new.  Something like this dates back to the 70s... (Macsyma/Maxima.)

Code: [Select]
(%i12) 1/(2*%pi) + 1/(6*%pi);
                                       2
(%o12)                               -----
                                     3 %pi
 
The following users thanked this post: newbrain

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #80 on: April 03, 2018, 11:08:54 pm »
It is not really anything to do with floating point... you can't give precise answers to quite a few real-world questions, including calculating taxes.

For example here is an easy question:

If I want to have $1000 in a savings account for Christmas, every Christmas, what is the minimum I need to save each week?
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: Wasn't expecting this... C floating point arithmetic
« Reply #82 on: April 03, 2018, 11:13:24 pm »
computer uses it's best skill, iterative, tedious computation.

What are you smoking? Seriously.

Quote
A human with more mathematical wit than me will realise they are quicker ways, that just require the kind of dynamic thought processes that computers don't have.

If you are trolling, then please stop because it's too much already.
Otherwise please update your knowledge about computers - what they are and how they work.
Hitchhiker's Guide to the Galaxy is the wrong way to learn about computers.
Use proper books that are not science fiction instead.
« Last Edit: April 03, 2018, 11:19:07 pm by ogden »
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #83 on: April 03, 2018, 11:32:01 pm »
It is not really anything to do with computers and floating point... you can't give precise answers to quite a few real-world questions, including calculating taxes, no matter how hard you try.

For example, here is an easy question:

If I want to have $1000 in a money jar for Christmas, every Christmas, how much do I need to put in the jar each week?

Any takers to not use a pesky, imprecise, error-prone computer to give a precise answer?

Something better than the obvious (and very practical) $20 per week?
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #84 on: April 04, 2018, 12:31:14 am »
If I want to have $1000 in a money jar for Christmas, every Christmas, how much do I need to put in the jar each week?

If your account doesn't earn any interest then the best way is to save the whole $1000 a week before Christmas. This way you enjoy free access to all of your money during the year.

If you do earn the interest, then the precise answer depends on the rules and rounding that your bank applies, and also on the overall amount of your income and your tax bracket, because your tax on the interest depends on it. Thus, the precise answer involves quite a bit of calculations.

 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #85 on: April 04, 2018, 02:28:33 am »
If your account doesn't earn any interest then the best way is to save the whole $1000 a week before Christmas. This way you enjoy free access to all of your money during the year.

If you do earn the interest, then the precise answer depends on the rules and rounding that your bank applies, and also on the overall amount of your income and your tax bracket, because your tax on the interest depends on it. Thus, the precise answer involves quite a bit of calculations.

If you did the calculations, could you give me a precise answer that cannot be proved wrong in at least some cases? (That is really the point of the question.) I don't think you can.

Arithmetic and mathematics are two different things. Computers are good at arithmetic, but mathematics? Meh, not so much.

Computers also have a level of quantization error. It is the programmer's job to manage that issue, and what works really well in one use case is not guaranteed to work in others. Sometimes the errors matter, sometimes they don't.  Sometimes things will just not work, like defining PI as a constant and then seeing that sin(PI*i) is never exactly zero for integer values of i, even though mathematically sin(pi*i) is zero for every integer i.

The world is filled with uncertainty and imprecise answers. Even the atomic weight of carbon changes depending on things...

Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #86 on: April 04, 2018, 09:17:33 am »
computer uses it's best skill, iterative, tedious computation.

What are you smoking? Seriously.

Quote
A human with more mathematical wit than me will realise they are quicker ways, that just require the kind of dynamic thought processes that computers don't have.

If you are trolling, then please stop because it's too much already.
Otherwise please update your knowledge about computers - what they are and how they work.
Hitchhiker's Guide to the Galaxy is wrong way to learn about computers.
Use proper books that are not science fiction, instead.

I'm sorry, who are you?

You are saying that a "computer"'s best feature is not its ability to compute?  It's kind of what they are by definition; it's even in the name.  They replaced people who were called computers, who calculated artillery shell trajectories, a job that was laborious, tedious and error-prone.
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #87 on: April 04, 2018, 10:05:02 am »
A submarine's best feature is its ability to swim underwater.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #88 on: April 04, 2018, 10:08:17 am »
@paulca: with you on this one.

Computers compute a tiny subset of all computations possible. Everything else is abstract and requires humans to map the abstraction onto the tiny subset. Sometimes someone manages to consolidate some of those abstractions into a program that generates the tiny subset (compiler / runtime / vm / CAS etc).

Computers are dumb as shit. Humans are the ones who are doing all the magic and leveraging the computer's advantage of doing simple and stupid things really quickly.
 

Offline ogden

  • Super Contributor
  • ***
  • Posts: 3731
  • Country: lv
Re: Wasn't expecting this... C floating point arithmetic
« Reply #89 on: April 04, 2018, 10:32:50 am »
I'm sorry, who are you?

One who did not quite connect with your abstract language "computer uses it's best skill" :D

Quote
You are saying that a "computer"'s best feature is not it's ability to compute?

I did not say that.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Wasn't expecting this... C floating point arithmetic
« Reply #90 on: April 04, 2018, 11:24:30 am »
The main point lies not in what a computer can or cannot do, but in numerical vs. symbolic calculation.
It's common to confuse the two.

For instance, something that looks as benign as 1/3 is typically a symbolic representation of a rational number. The numerical representation we sometimes use to describe it, "0.3333...", is nothing more than a notation meaning exactly that this is a rational number (and thus has an infinitely repeating decimal pattern), and is equivalent to writing 1/3. It bears no extra meaning. Now when we cut the decimals to a finite number of places, we leave the symbolic realm and turn it into an approximation.

Again, an FPU is designed for numerical approximate computation only. But a computer is perfectly capable of dealing with symbolic computation when programmed properly.
The fact that modern CPUs don't integrate built-in symbolic computation is just a matter of low demand and probably unjustified extra complexity.

Granted it could be nice to have at least built-in rational number support and this may not cost that much. But demand is most likely low enough that vendors don't care.
 

Offline hamster_nz

  • Super Contributor
  • ***
  • Posts: 2803
  • Country: nz
Re: Wasn't expecting this... C floating point arithmetic
« Reply #91 on: April 04, 2018, 11:41:16 am »
Granted it could be nice to have at least built-in rational number support and this may not cost that much.
But all the interesting math is done using irrational numbers... :D
Gaze not into the abyss, lest you become recognized as an abyss domain expert, and they expect you keep gazing into the damn thing.
 

Offline legacy

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: Wasn't expecting this... C floating point arithmetic
« Reply #92 on: April 04, 2018, 11:46:00 am »
built-in symbolic computation is just a matter of low demand and probably unjustified extra complexity

There has always been great demand for a definitive system for modern technical computing. Mathematica is a good approximation of that ambition, and Wolfram has always provided the best CAS available for the technology of the day; e.g. they released it for Irix when SGI was a good choice. Now they release it for the rpi@linux, partly to keep interest high among those, typically students, who can't afford the full purchase.

Open source has never had any valid alternative.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Wasn't expecting this... C floating point arithmetic
« Reply #93 on: April 04, 2018, 12:05:29 pm »
Of course there is demand for symbolic computation tools. There just isn't any justified demand for that to get built-in, hard-wired into CPUs. And that wouldn't even make much sense.

 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Wasn't expecting this... C floating point arithmetic
« Reply #94 on: April 04, 2018, 12:09:10 pm »
But all the interesting math is done using irrational numbers... :D

That's a point of view. A lot more can be done with rational numbers than most people think. Define "interesting".  :-DD

Again, when approximations are not good enough for your particular needs when using irrational numbers (most often they are when used properly), you can use some kind of symbolic computation instead. Tools and libraries abound.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #95 on: April 06, 2018, 04:18:43 pm »
Quote
time for a new number format : Algebraic

It's not new.  Something like this dates back to the 70s... (Macsyma/Maxima.)

Code: [Select]
(%i12) 1/(2*%pi) + 1/(6*%pi);
                                       2
(%o12)                               -----
                                     3 %pi
So how come there is not a standard library for this ? and we are all still effing around with ieee7-whatever ?
i'm willing to guess that banks are NOT using ieee format ... Actually visual basic has the 'currency' format. specifically to avoid these rounding error things. small things accumulate very quickly if you are doing millions of transactions a day ... banks don't like to lose money due to imprecise maths...
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #96 on: April 06, 2018, 09:58:17 pm »
Quote
time for a new number format : Algebraic

It's not new.  Something like this dates back to the 70s... (Macsyma/Maxima.)

Code: [Select]
(%i12) 1/(2*%pi) + 1/(6*%pi);
                                       2
(%o12)                               -----
                                     3 %pi
So how come there is not a standard library for this ? and we are all still effing around with ieee7-whatever ?
i'm willing to guess that banks are NOT using ieee format ... Actually visual basic has the 'currency' format. specifically to avoid these rounding error things. small things accumulate very quickly if you are doing millions of transactions a day ... banks don't like to lose money due to imprecise maths...

Not quite. Banking calculations specify a particular rounding mode, one of many rounding modes.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4785
  • Country: pm
  • It's important to try new things..
Re: Wasn't expecting this... C floating point arithmetic
« Reply #97 on: April 06, 2018, 10:13:27 pm »
Banking apps do not use binary floating point. They use decimal floating point. There are CPUs with that option too (ie Power6/7/8/9/..)
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #98 on: April 06, 2018, 10:25:53 pm »
Banking apps do not use binary floating point. They use decimal floating point. There are CPUs with that option too (ie Power6/7/8/9/..)

Correct. Although perhaps strangely, a lot of the platforms that sit on the Z series (that aren’t using Java) don’t actually use the CPU support but use decNumber because it is portable and consistent across all architectures. No one has a z series on their desk...  : http://speleotrove.com/decimal/


Also that’s what the WP81 calculator uses, an HP calculator emulation on hardware. Interestingly HP used decimal arithmetic on their calculators too.
« Last Edit: April 06, 2018, 10:27:29 pm by bd139 »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #99 on: April 06, 2018, 10:38:49 pm »
For banking applications, the best way is using integers representing the number of cents.

32-bit integers are not long enough to hold accounting numbers any more, but 64-bit integers still provide enough room, unless we get run-away inflation that is.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #100 on: April 06, 2018, 10:49:26 pm »
Number cents isn't enough precision sometimes. Some unit values are far less than one cent so you need variable precision. If you look at the method I described here, it allows variable precision with persistence:

https://www.eevblog.com/forum/microcontrollers/wasn_t-expecting-this-c-floating-point-arithmetic/msg1469544/#msg1469544
 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4785
  • Country: pm
  • It's important to try new things..
Re: Wasn't expecting this... C floating point arithmetic
« Reply #101 on: April 06, 2018, 10:49:56 pm »
The banking apps use something like 34 decimal digits math (supported by the math co-processors ie. in P6+..).
Btw my wp-34s calculator uses decNumber too :)
Many of the older HP and TI calculators worked with decimal representation (and decimal CPUs), unfortunately with pretty low precision (compared to say 50 digits used by the wp-34s).
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #102 on: April 06, 2018, 10:51:25 pm »
wp34 that's the one. Not sure where I got WP81 from! Need more coffee :)
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Wasn't expecting this... C floating point arithmetic
« Reply #103 on: April 06, 2018, 10:53:12 pm »
For banking applications, the best way is using integers representing the number of cents.

32-bit integers are not long enough to hold accounting numbers any more, but 64-bit integers still provide enough room, unless we get run-away inflation that is.

From which we can infer that you have not been involved in specifying arithmetic for banking systems.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #104 on: April 06, 2018, 10:57:12 pm »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23024
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #105 on: April 06, 2018, 11:09:36 pm »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)

It's not so much about rounding as about precision. All financial values are rational, and the denominator is 10^N, where N defines the precision. Place-value systems or encodings (decimal) that support that class of denominator agree with the results we desire and observe on paper. This comes from us having 10 fingers and being dumbasses for several thousand years. If we had 8 fingers, like in The Simpsons, perhaps binary fractions would be better.

Incidentally, there are no standard rounding rules. They are arbitrary, so they have to be programmed for the use case in question. This is why we wrote our own decimal numeric system, which has precise, predictable performance and allows operations to have rounding algorithms applied on a case-by-case basis.

A good book on the history of this and the why is Jan Gullberg's "Mathematics: From the Birth of Numbers", which covers the history of calculation and number systems as well as, well, pretty bloody much everything. Wonderful book, written by a surgeon, not a mathematician, so it actually makes sense.

Edit: Might be the half bottle of wine bending my brain,  but it made me snigger thinking the above through.  99p shops would be called something like 111111 shops (because they were trying to undercut 1000000 shops) if we used binary! Perhaps 10 fingers was right after all.
« Last Edit: April 06, 2018, 11:20:36 pm by bd139 »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #106 on: April 06, 2018, 11:48:09 pm »
Quote
So how come there is not a standard library for [symbolic rational numbers ala Macsyma] ? and we are all still effing around with ieee7-whatever ?
Because it's not generally required or even useful, and rather expensive, computationally?
Banking, which everyone is using as an example, seems to be some bastard union of integer "values" and various "rates" that aren't integers ("3.4% interest, compounded continuously"?).  The rules look more aimed at preventing "cheating" than at preserving accuracy in an absolute sense.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #107 on: April 06, 2018, 11:52:50 pm »
Number cents isn't enough precision sometimes. Some unit values are far less than one cent so you need variable precision. If you look at the method I described here, it allows variable precision with persistence:

https://www.eevblog.com/forum/microcontrollers/wasn_t-expecting-this-c-floating-point-arithmetic/msg1469544/#msg1469544

You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.

No matter what you do, the rounding problem remains. Say you have 1,000,000 clients and you want to calculate interest for them. You cannot tell a client that their interest is 0.534566 cents, so you need to round it to whole cents somehow. If you use correct mathematical rounding, there will be an error in the total interest, so you either have to live with the error or go back and correct the rounding for some of the clients. There's no other way. Of course, you can round the numbers down, which brings you a little extra profit every time (averaging 0.5 cents per client, $5,000 if you have 1,000,000 clients), but this doesn't give you an exact match either. Either way, the rounding problem is fundamental and cannot be solved by using higher precision internally.

 

Offline iMo

  • Super Contributor
  • ***
  • Posts: 4785
  • Country: pm
  • It's important to try new things..
Re: Wasn't expecting this... C floating point arithmetic
« Reply #108 on: April 07, 2018, 12:17:31 am »
Customer XY
Your interest          0.00534566 USD
Rounded interest    0.01 USD
Transaction fees     1.00 USD
Total                   - 0.99 USD
 ;)
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 8646
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #109 on: April 07, 2018, 12:23:15 am »
Re: Banking. Can anyone explain why decimal fractions and decimal rounding are better than binary fractions and binary rounding? (Since interest, fees and tax calculations must certainly incur fractions that need rounding.)
Historically, there has been a feeling that if financial results from a computer do not exactly match what a human would get with pencil and paper, there would be lots of complaints from humans who have checked figures on computer printouts. I don't know if that ever turned out to be the case in practice.
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #110 on: April 07, 2018, 01:02:53 am »
Number cents isn't enough precision sometimes. Some unit values are far less than one cent so you need variable precision. If you look at the method I described here, it allows variable precision with persistence:

https://www.eevblog.com/forum/microcontrollers/wasn_t-expecting-this-c-floating-point-arithmetic/msg1469544/#msg1469544
You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.
That doesn't work. Check component prices: I've seen resistor and capacitor prices with 4 or 5 digits after the decimal point. The thing is that the rounding should happen at the end and not in between at every stage.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline bson

  • Supporter
  • ****
  • Posts: 2270
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #111 on: April 07, 2018, 01:42:52 am »
(1/3)*3, answer 1.  This is correct mathematically, but not what floating point arithmetic gives you.

A computer should NOT be able to answer (1/3)*3 correctly using pure floating point arithmetic.  But it does give the correct answer.
You're converting a constant the compiler can format at compile time, so this is likely a bug in the compile-time printf implementation.

Try:

Code: [Select]
#include <stdio.h>

int main() {
  const float a = 1.0/3;
  const float b = 1.0 - 3.0*a;

  printf("b=%g\n", b);

  return 0;
}

Code: [Select]
$ gcc  -O0 -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08
$ gcc  -O3 -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08
$ gcc  -Os -o foo3 foo3.c
$ ./foo3
b=-2.98023e-08

On the other hand, the following produces "b=0":

Code: [Select]
#include <stdio.h>

int main() {
  const float a = 1.0/3;
  const float b = 3.0*a;

  printf("b=%g\n", 1.0 - b);

  return 0;
}

Actually, on second thought I wonder if it's not related to the second form performing 1.0 - b as a double and passing that to printf, while the former calculates b as a float, then promotes that to double for printf...
« Last Edit: April 07, 2018, 01:44:50 am by bson »
 

Online IanB

  • Super Contributor
  • ***
  • Posts: 11891
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #112 on: April 07, 2018, 03:04:44 am »
Actually, on second thought I wonder if it's not related to the second form performing 1.0 - b as a double and passing that to printf, while the former calculates b as a float, then promotes that to double for printf...

No, it's simply a peculiarity of binary arithmetic. For example, see below. There is no funny rounding or type conversion going on here:


 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Wasn't expecting this... C floating point arithmetic
« Reply #113 on: April 07, 2018, 03:11:39 am »
You can just use the smallest unit, whatever it is, as one. And make sure that the integer is big enough to represent the biggest amount.
That doesn't work. Check component prices. I've seen resistors and capacitor prices with 4 or 5 digits after the decimal point. The thing is that the rounding should happen at the end and not in between at every stage.

It certainly does. Scale it so that an integer 1,000,000 represents one dollar, and you've got 6 digits after the decimal point. Also note that this  eliminates binary vs. decimal controversy.

« Last Edit: April 07, 2018, 04:37:53 am by NorthGuy »
 

Offline bson

  • Supporter
  • ****
  • Posts: 2270
  • Country: us
Re: Wasn't expecting this... C floating point arithmetic
« Reply #114 on: April 07, 2018, 04:17:39 am »
A quick test.

Code: [Select]
#include <stdlib.h>

int main() {
  const float c = 1.0;

  abort();
}

Then:

Code: [Select]
: //trumpet ~ ; gcc -g -O0 -o foo foo.c
: //trumpet ~ ; gdb ./foo
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7_4.1
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /home/bson/foo...done.
(gdb) r
Starting program: /home/bson/./foo

Program received signal SIGABRT, Aborted.
0x00007ffff7a4d1f7 in raise () from /usr/lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7_4.2.x86_64
(gdb) up
#1  0x00007ffff7a4e8e8 in abort () from /usr/lib64/libc.so.6
(gdb) up
#2  0x0000000000400543 in main () at foo.c:6
6   abort();
(gdb) p c
$1 = 1
(gdb) whatis c
type = const float
(gdb) p &c
$2 = (const float *) 0x7fffffffe12c
(gdb) p $sp
$3 = (void *) 0x7fffffffe120
(gdb) p/cx *(char*)&c @ 4
$4 = {0x0, 0x0, 0x80, 0x3f}
(gdb) set var c = 1./3
(gdb) p/cx *(char*)&c @ 4
$5 = {0xaa, 0xaa, 0xaa, 0x3e}
(gdb) p c
$6 = 0.333333313
(gdb) set var *(int*)&c = 0x3eaaaaab
(gdb) p c
$7 = 0.333333343
(gdb) p c * 3.0
$8 = 1.0000000298023224
(gdb) set var *(int*)&c = 0x3eaaaaaa
(gdb) p c
$9 = 0.333333313
(gdb) p c * 3.0
$10 = 0.99999994039535522
(gdb)

From this you can see that 1./3 has no exact binary representation, and hence storing it in a float introduces a rounding error.
Multiplying it by 3 scales up the rounding error, but the result still happens to land within 1/2 LSB of 1.0 in a binary32 IEEE 754 float, so it rounds back to exactly 1.0.
When the value is passed as a parameter to printf, the default argument promotions for a variadic function convert the float to binary64 (the C double). And in double arithmetic the float-sized rounding error becomes visible.
 

Offline paulcaTopic starter

  • Super Contributor
  • ***
  • Posts: 4051
  • Country: gb
Re: Wasn't expecting this... C floating point arithmetic
« Reply #115 on: April 07, 2018, 08:49:08 am »
Quote
So how come there is not a standard library for [symbolic rational numbers ala Macsyma] ? and we are all still effing around with ieee7-whatever ?
Because it's not generally required or even useful, and rather expensive, computationally?
Banking, which everyone is using as an example, seems to be some bastard union of integer "values" and various "rates" that aren't ("3.4% interest, compounded continuously"?)  Their rules look more aimed at preventing "cheating" than preserving accuracy in an absolute sense.

Most of my banking software experience was moving data around, rather than calculating it, but in the times I did calculate things there were specs so it was done consistently.

Surprisingly the specs were not that demanding on "how" you got the results, except that the actual calculations should use "double precision". The specs were usually highly focused on precision and rounding at the "point of record".

At the point you record a value into a so-called record-of-truth, it decouples from any previous calculations done to it.  You can't, for example, calculate the interest on an account as $0.12345, put $0.12 on the statement, and then go ahead and increase the balance by $0.12345.  Nor can you do it the other way around: show $0.12345 interest on the statement but only actually increase the balance by $0.12.

So what you round and record on a ledger/statement is the legal value.

Below is mostly assumption...

I haven't done compound interest calculations in a bank, but I would assume that when they say the interest is calculated daily and added monthly, the daily calculation "could" use higher precision than a cent/penny, but the monthly aggregate added to your account will be rounded to cents/pence, and that rounded cent/pence balance is what's used in the next month.

If you open a savings account, put $1 into it at a 1% APR, and interest is calculated daily but added yearly (which is common), then the daily interest is a very small number compared to a cent/penny ($0.00002739726027... per day).  If they rounded those daily figures to cents/pence they would get 0.00 every day, and your yearly interest would add up to 0.00.  Yet I have had savings accounts with virtually nothing in them and still accrued interest.  Of course, nothing says they actually calculate interest "daily" in real time.  They can loop through the account at the end of the year, take the closing (or peak, if you are lucky) balance for each day, keep a double-precision running total in memory, and deposit the aggregate interest at the end of the year... as a rounded cent/pence amount.  So potentially you could see floating-point approximation errors in your interest.

It also opens questions about compounding resolution.  This is totally an assumption, but when they say they calculate the interest daily but add it monthly, the compounding resolution would surely be monthly: non-compounded per-day interest based on the account balance, aggregated and added to the account at the end of the month, where it is then included in the next day's calculation.

As a challenge you could of course download your bank statement and see if you can calculate the interest yourself, see how close you get to the banks figure with different techniques.
« Last Edit: April 07, 2018, 08:54:23 am by paulca »
"What could possibly go wrong?"
Current Open Projects:  STM32F411RE+ESP32+TFT for home IoT (NoT) projects.  Child's advent xmas countdown toy.  Digital audio routing board.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #116 on: April 09, 2018, 05:27:32 am »
so if we have these imprecise math libraries: how the hell can we calculate the 27 millionth decimal of pi? What kind of computational floating point package allows for that?
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline agehall

  • Frequent Contributor
  • **
  • Posts: 383
  • Country: se
Re: Wasn't expecting this... C floating point arithmetic
« Reply #117 on: April 09, 2018, 05:46:07 am »
so if we have these imprecise math libraries: how the hell can we calculate the 27 millionth decimal of pi? What kind of computational floating point package allows for that?

Algorithms. It's not like that is done in one single computation or anything. You can compute anything on a computer as long as you understand how to work around the deficiencies in it. One such way is to simply construct algorithms that are adapted to computers using techniques like fixed point math and others that have been mentioned in this thread.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Wasn't expecting this... C floating point arithmetic
« Reply #118 on: April 09, 2018, 05:22:38 pm »
If you use an architecture where float and double are IEEE-754 binary32 and binary64, respectively, and integer and floating-point byte order is the same, you might find the attached float-bits command-line utility useful. It should compile with any C99 or later C compiler. In particular, it works fine on 32-bit and 64-bit Intel/AMD architectures.

Simply put, it shows the binary representation of any floating-point number, or the sum, difference, product, or division of a pair of numbers. Run it without arguments, and it shows the usage.

If you use Linux, you can compile and install it using e.g. gcc -Wall -O2 float-bits.c -o float-bits && sudo install -o root -g root -m 0755 float-bits /usr/local/bin/.

If we run float-bits -f 1/3 the output is
Code: [Select]
1/3 = 0.3333333432674408
  0 011111110 (1)0000000000000000000000
/ 0 100000001 (1)0000000000000000000000
= 0 011111010 (1)1010101010101010101011

With regards to the issue OP is having, the key is to look at the result: it is rounded up. (The mathematical evaluation rules in C are such that when the result is stored in a variable, or the expression is cast to a specific numeric type, the compiler must evaluate the value at the specified precision and range. This means that it is not allowed to optimize away entire operations.)

Note that if we run float-bits -f 0.99999997/3 the output is
Code: [Select]
0.99999997/3 = 0.3333333134651184
  0 011111101 (1)1111111111111111111111
/ 0 100000001 (1)0000000000000000000000
= 0 011111010 (1)1010101010101010101010

So, the three numbers closest to one third that a single-precision floating-point number can represent are float-bits -f 0.33333332 0.33333334 0.33333336:
Code: [Select]
0.33333332: 0 011111010 (1)1010101010101010101010
0.33333334: 0 011111010 (1)1010101010101010101011
0.33333336: 0 011111010 (1)1010101010101010101100

Multiplying them by three (float-bits -f 3x0.33333332 3x0.33333334 3x0.33333336) yields
Code: [Select]
3x0.33333332 = 0.9999999403953552
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101010
= 0 011111101 (1)1111111111111111111111
3x0.33333334 = 1.0000000000000000
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101011
= 0 011111110 (1)0000000000000000000000
3x0.33333336 = 1.0000001192092896
  0 100000001 (1)0000000000000000000000
x 0 011111010 (1)1010101010101010101100
= 0 011111110 (1)0000000000000000000001

Essentially, when one writes 3.0f * (float)(1.0f / 3.0f) or something equivalent in C (using C99 or later rules), two implicit rounding operations occur. The first rounds one third up to the nearest value representable by a binary32 float, and the second rounds the resulting slightly-over-one product down to the nearest representable value, which is exactly one. (Remember that these rounding operations operate on the floating-point number, and can at most add or subtract one unit in the last place.)

The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.

Floating-point math is still exact math, it's just that after each operation, there is an implicit rounding to the nearest value representable by the used type. (However, there are "unsafe math optimizations" some compilers can do, which fuse multiple operations to one; and the FMA intrinsics are designed to do a fused multiply-add where only one rounding happens, at the end.)
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26906
  • Country: nl
    • NCT Developments
Re: Wasn't expecting this... C floating point arithmetic
« Reply #119 on: April 09, 2018, 05:28:44 pm »
The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.
IMHO there is nothing unexpected here. When you use any kind of math on a computer you know the precision is limited, so you have to figure out how many meaningful digits you need and round the result accordingly. That way you will always get the result you expect.
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: Wasn't expecting this... C floating point arithmetic
« Reply #120 on: April 09, 2018, 05:54:26 pm »
The answer to OP's question is then that this happens, because when implemented in floating-point math, there are two rounding operations done, and using the default rounding rules the two happen to cancel each other out, giving the unexpected, mathematically correct value.
IMHO there is nothing unexpected here. When you use any kind of math on a computer you know the precision is limited, so you have to figure out how many meaningful digits you need and round the result accordingly. That way you will always get the result you expect.
it would be fun to have logic gates where
1 and 1 is 99.999999999987485 % of the times 1
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Wasn't expecting this... C floating point arithmetic
« Reply #121 on: April 09, 2018, 06:32:19 pm »
IMHO there is nothing unexpected here.
I was trying to refer to how OP felt it was unexpected.  To those who use and apply C and IEEE-754/854 rules, there is nothing surprising or unexpected here.

My main point was that the floating-point math is well defined and exact (in the sense that the standard does not allow more than half an ULP of error for most operations, which means the operations yield the exact same bit patterns on all standards-compliant architectures). It's just that the implicit rounding operations done (and required by the standards) after each operation throw people off.

If you look at the Kahan summation algorithm at Wikipedia, check the Possible invalidation by compiler optimization section. With current compilers, even with the most aggressive optimizations used, one only needs a couple of casts to implement the algorithm correctly. This is because casts (expressions of form (double)(expression)) limit the precision and accuracy to the specified type (double), just like the implicit rounding I've mentioned. There is no need to try and use extra temporary variables or such.

There are other rules/functions that are extremely useful, too. For example, if you need to calculate an expression where a denominator may become zero, rather than test it explicitly beforehand, you can simply do the division and use isfinite() on the result to check that the operation did not fail due to the divisor being too close to zero. (Unfortunately, this runs afoul of the "unsafe-math-optimizations" options for many compilers.) All you need to do is ensure math exceptions are disabled (using fesetenv()), so that your process won't keel over due to a floating-point exception.

All of this applies to microcontrollers, too, except that some settings might be hardcoded (and no fesetenv() available), depending on the base libraries.

Without hardware floating-point support, fixed-point math tends to be much faster than floating-point. (The logical value v is represented by an N-bit signed integer, round(v×2^Q), where Q < N is the number of fractional bits.) Operations on fixed-point numbers still involve implicit rounding after every operation, but the lack of an exponent (used in floating-point types) makes it much easier to implement.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21686
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Wasn't expecting this... C floating point arithmetic
« Reply #122 on: April 09, 2018, 08:35:38 pm »
it would be fun to have logic gates where
1 and 1 is 99.999999999987485 % of the times 1

I'd love to have a BER that low!

Indeed, as long as BER < 0.5, one can stack an arbitrary number of gates, error correction blocks, etc., to achieve arbitrarily high certainty.  It's the same as losing versus winning infinite money from gambling when the odds are only slightly in (or against) your favor.

Sooner or later we will have to understand stochastic computing: whether through the continued miniaturization of conventional logic with ever-shrinking thresholds, or the development of quantum computing, where errors are introduced by environmental (thermal) perturbations to the system state.  (That is, to implement a so-and-so-qubit calculation on a crappy computer, throw in however many times more qubits as error correcting functions, and pump the whole system.  Effectively, you'll be sinking the excess heat out of the error correcting blocks as information entropy, pushing the intended calculation towards its desired state.)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

