Using them after a series of Float calculations would be completely ludicrous. For that you *would* want something around 10^-3 to 10^-4 at the end of a series of calculations, such as solving a matrix equation.
The point was exactly that the choice of epsilon is not just a number; it has to have some qualitative meaning to the user. You, too, are now making the grave error of throwing around magic numbers without explaining
why. (I know why, you know why, most reading this do not know why.)
The 0.000000000000000000054210115f is the smallest one I'd use, and only with single-precision 2D/3D vector algebra, for the exact reasons I listed. It works absolutely fine for e.g. raytracing using single precision. (For OpenGL and similar, which have significant limitations in their Z buffers, you may consider a much larger epsilon, exactly because of the Z buffer range limitations: these involve differentiating numbers close to 1 from exactly 1.)
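To put a rough number on that last point (my own illustration, not part of the argument above): near 1.0f the spacing between adjacent single-precision values is about 1e-7, so any absolute epsilon meant to separate "close to 1" from "exactly 1" has to be at least of that magnitude. A minimal C sketch showing the actual gaps:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* Spacing of representable single-precision values around 1.0f.
           Anything closer to 1.0f than these gaps cannot be told apart
           from 1.0f at all, no matter what epsilon you pick. */
        float gap_above = nextafterf(1.0f, 2.0f) - 1.0f; /* 2^-23, about 1.19e-7 */
        float gap_below = 1.0f - nextafterf(1.0f, 0.0f); /* 2^-24, about 5.96e-8 */
        printf("gap above 1.0f: %g\n", (double)gap_above);
        printf("gap below 1.0f: %g\n", (double)gap_below);
        printf("FLT_EPSILON:    %g\n", (double)FLT_EPSILON);
        return 0;
    }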
For single-precision linear algebra, we need to examine the operations performed. If we assume a square n×n matrix, LU decomposition involves sums of products. Both substitution passes also involve a sum of products. So, a reasonable first estimate for an absolute epsilon would be the largest positive number that, when raised to the third or fourth power, yields zero. For single-precision IEEE-754 floating point, these are
0.0000000000000008881784f ≃ 1e-15 and
0.0000000000051448788f ≃ 5e-12, respectively. However, if we also consider their reciprocals overflowing to infinity, then the limits are
0.00000000000014323644f ≃ 1e-13 and
0.00000000023283064f ≃ 2e-10 instead.
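These limits are easy to re-derive from the single-precision range itself. Here is a small C sketch (mine, purely for checking the figures; the exact last digits depend on how you treat rounding right at the boundary):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* Smallest positive single-precision value (subnormal), 2^-149.
           With round-to-nearest, results below half of this round to zero. */
        const double tiny = (double)FLT_MIN * (double)FLT_EPSILON;
        const double huge = (double)FLT_MAX;

        /* Largest magnitudes whose cube / fourth power rounds to zero. */
        printf("x^3 rounds to 0 below about %.7g\n", cbrt(0.5 * tiny));
        printf("x^4 rounds to 0 below about %.7g\n", pow(0.5 * tiny, 0.25));

        /* Largest magnitudes whose cube / fourth power has an infinite
           reciprocal, i.e. 1/x^3 or 1/x^4 exceeds FLT_MAX. */
        printf("1/x^3 overflows below about %.7g\n", pow(huge, -1.0 / 3.0));
        printf("1/x^4 overflows below about %.7g\n", pow(huge, -1.0 / 4.0));
        return 0;
    }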
In practice, domain cancellation in the sums often reduces precision. (As an example of this, calculate the sum of 2e30, 1e-30, and -2e30. Unless your summation does something clever, like reordering the terms in order of descending magnitude, you'll get a result of zero instead of 1e-30, even when using double precision; see the sketch below.) So, the above must be understood in terms of
"when doing these kinds of operations, values smaller than this will produce zero or infinity".
For domain cancellation, a typical "fudge factor" is about one order of decimal magnitude, or three to four bits. To be precise, it depends on the number of terms in the sums, and also on the variance of those terms; so a crude first estimate is often used.
A practical estimate for the absolute epsilon would be the largest positive value whose cube is still "zero" in the sense above, i.e. below the ≃ 1e-13 limit; equivalently, the cube of the reciprocal of its cube (1/x^9) is still infinite. (You need to examine the various ways of solving Ax=b, condition numbers, and exactly when the calculation may fail, to decide for yourself whether that is a reasonable model for the complexity of the operations involved, with the "fudge factor" applied to deal with domain cancellation and similar rounding errors in the addition and subtraction operations.) That is, it is the largest value that we want to treat as zero when dealing with a calculation that involves the cubed reciprocal of its cube. This would be
0.000052322018f ≃ 5e-5 (still for single-precision floating point). Thus, with the fudge factor applied, you end up with around
5e-4.
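A quick C sketch reproducing those two figures (assuming the limit is taken exactly at the point where 1/x^9 exceeds FLT_MAX, as described above, and a flat factor of ten for the fudge):

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
        /* Largest x for which the cube of the reciprocal of its cube,
           1/x^9, still exceeds FLT_MAX: x = FLT_MAX^(-1/9). */
        double practical = pow((double)FLT_MAX, -1.0 / 9.0);

        /* Roughly one decimal order of magnitude of slack for domain
           cancellation in the sums. */
        double fudged = 10.0 * practical;

        printf("practical absolute epsilon: %.8g\n", practical); /* about 5.23e-5 */
        printf("with the fudge factor:      %.2g\n", fudged);    /* about 5e-4   */
        return 0;
    }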
This is how you reason towards a reasonable absolute epsilon. You do not just pull it out of your hat and rely on your authority or experience that it will work. There should always be a reason you can put in a comment next to where you assign the value to your absolute epsilon in the code. And I do expect any numerically sane code to have that comment, too. Otherwise, it is just somebody's best guess.
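Purely as an illustration (the constant name and exact wording are made up, not from any real code base), this is the kind of comment I mean:

    /* Absolute epsilon for the single-precision Ax=b solver.
     *
     * Rationale: the solve involves sums of products and reciprocals of
     * those products. Single-precision values below roughly 5e-5 already
     * behave as zero or infinity in such expressions (see the reasoning
     * above), plus one decimal order of magnitude of slack for domain
     * cancellation in the sums.
     */
    #define SOLVER_ABS_EPSILON 5.0e-4f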