Author Topic: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware  (Read 5962 times)

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
The following CC0-1.0-licensed C99 code implements unit-period sine (and cosine) functions using very few float operations, needing only a 144-byte precalculated coefficient table.  The functions are
    usinf(x) = sin(2·π·x)
    ucosf(x) = cos(2·π·x)
While this is not the best, the fastest, or the smallest possible implementation, it is lightweight.  Compiled for Cortex-M4F or Cortex-M7F with hardware single-precision floating-point support, the code is 250 to 300 bytes depending on your optimization options, plus the 144-byte lookup table, for a total of roughly 400 to 440 bytes of Flash used.

All finite normal arguments x yield a result within ±7 ULPs of the expected value (verified for all 1,056,964,609 finite normal values between 0 and 1, i.e. one full period).  In other words, the result has at least 21 bits of precision, or six significant decimal digits.  Also, for integer n, usinf(0.5·n) = 0 and usinf(0.5·n+0.5) = ±1; similarly for ucosf().
Code: [Select]
// SPDX-License-Identifier: CC0-1.0
// Nominal Animal, 2024

#include <stdint.h>
#include <math.h>

const struct {
    float  v;  // Value
    float  t;  // One third of tangent
} usinf_coefficient[18] = {
    { .v = 0.0f,        .t = 0.032724924f  },
    { .v = 0.09801714f, .t = 0.032567345f  },
    { .v = 0.19509032f, .t = 0.03209612f   },
    { .v = 0.29028466f, .t = 0.0313158f    },
    { .v = 0.38268343f, .t = 0.030233886f  },
    { .v = 0.47139674f, .t = 0.028860806f  },
    { .v = 0.55557024f, .t = 0.02720978f   },
    { .v = 0.6343933f,  .t = 0.025296709f  },
    { .v = 0.70710677f, .t = 0.023140015f  },
    { .v = 0.77301043f, .t = 0.020760471f  },
    { .v = 0.8314696f,  .t = 0.018180992f  },
    { .v = 0.8819213f,  .t = 0.0154264225f },
    { .v = 0.9238795f,  .t = 0.012523286f  },
    { .v = 0.95694035f, .t = 0.009499544f  },
    { .v = 0.98078525f, .t = 0.006384316f  },
    { .v = 0.9951847f,  .t = 0.0032076035f },
    { .v = 1.0f,        .t = 0.0f          },
    { .v = 0.9951847f,  .t = 0.0032076035f },
};

// 64 sectors form a full turn.
// phase = 0..1, and corresponds to fraction within sector.
float usinf_sector(int32_t sector, float phase) {
    int_fast8_t  index;

    // Bit 4 of the sector indicates phase inversion.
    if (sector & 16) {
        phase = 1.0f - phase;
        index = 15 - (sector & 15);
    } else {
        index = sector & 15;
    }

    // For the De Casteljau evaluation, we need the opposite phase.
    const float  iphase = 1.0f - phase;

    // Values at sector endpoints
    const float  v0 = usinf_coefficient[index].v;
    const float  v1 = usinf_coefficient[index+1].v;

    // One thirds of tangents at sector endpoints
    const float  t0 = usinf_coefficient[index].t;
    const float  t1 = usinf_coefficient[index+1].t;

    // Bezier points; b0 = v0, b3 = v1
    const float  b1 = v0 + t0,
                 b2 = v1 - t1;

    // First de Casteljau step, with z0 and z2 optimized.
    const float  z0 = v0 + (phase * t0),
                 z1 = (iphase * b1) + (phase * b2),
                 z2 = v1 - (iphase * t1);

    // Second de Casteljau step.
    const float  y0 = (iphase * z0) + (phase * z1),
                 y1 = (iphase * z1) + (phase * z2);

    // Final de Casteljau step.
    const float  x0 = (iphase * y0) + (phase * y1);

    // Bit 5 of the sector indicates negation
    return (sector & 32) ? -x0 : x0;
}

float usinf(float x) {
    x *= 64.0f;  // Power-of-two multiplication is exact!

    int32_t sector = (int32_t)x;
    float   phase  = (x - (float)sector);
    if (phase < 0.0f) {
        phase += 1.0f;
        sector--;
    } else
    if (phase >= 1.0f) {
        phase -= 1.0f;
        sector++;
    }

    return usinf_sector(sector, phase);
}

float ucosf(float x) {
    x *= 64.0f;

    int32_t sector = (int32_t)x;
    float   phase  = (x - (float)sector);
    if (phase < 0.0f) {
        phase += 1.0f;
        sector--;
    } else
    if (phase >= 1.0f) {
        phase -= 1.0f;
        sector++;
    }

    return usinf_sector(sector + 16, phase);
}
The implementation is straightforward.  A full rotation is divided into 64 sectors.  One quarter sine wave is thus 16 sectors.  Each sector is described by the value at that point, and by the slope or tangent at that point (divided by three).  A cubic one-dimensional Bézier curve/function is used to interpolate within each sector, using the De Casteljau algorithm for better numerical stability and accuracy.
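For reference, the underlying identity is the standard Hermite-to-Bézier correspondence.  With endpoint values \$v_0, v_1\$ and endpoint derivatives \$m_0, m_1\$ (per unit phase), the interpolant on one sector is the cubic Bézier
$$B(p) = (1-p)^3 b_0 + 3 (1-p)^2 p \, b_1 + 3 (1-p) p^2 \, b_2 + p^3 b_3$$
with control points
$$b_0 = v_0, \quad b_1 = v_0 + \frac{m_0}{3}, \quad b_2 = v_1 - \frac{m_1}{3}, \quad b_3 = v_1 .$$
This is why the table stores one third of the tangent: b1 and b2 are then just one addition or subtraction away, exactly as in the code above.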

If we calculate the error distances in units of the least significant place (ULPs) of the expected value (as obtained by the IEEE 754 double-precision sin() and cos() functions, multiplying the argument by two pi), the error counts among normal (subnormals excluded) IEEE-754 float values between 0 and 1 are as follows:
$$\begin{array}{l|r:r|r:r}
~ & \text{usinf} & ~ & \text{ucosf} & ~ \\
\text{ ULPs } & \text{ Count } & \text{ Fraction } & \text{ Count } & \text{ Fraction } \\
\hline
+7 & 90 & 0.000\% & 141 & 0.000\% \\
+6 & 11,250 & 0.001\% & 16,940 & 0.002\% \\
+5 & 181,952 & 0.017\% & 272,940 & 0.026\% \\
+4 & 982,297 & 0.093\% & 1,321,792 & 0.125\% \\
+3 & 3,346,595 & 0.317\% & 2,546,888 & 0.241\% \\
+2 & 19,657,519 & 1.860\% & 3,137,123 & 0.297\% \\
+1 & 325,906,619 & 30.834\% & 9,457,058 & 0.895\% \\
\hdashline
+0 & 581,617,000 & 55.027\% & 975,874,671 & 92.328\% \\
\hdashline
-1 & 76,779,488 & 7.264\% & 23,267,170 & 2.201\% \\
-2 & 25,569,180 & 2.419\% & 14,774,399 & 1.398\% \\
-3 & 15,692,769 & 1.485\% & 13,570,569 & 1.284\% \\
-4 & 5,872,646 & 0.556\% & 10,183,454 & 0.963\% \\
-5 & 1,229,700 & 0.116\% & 2,434,365 & 0.230\% \\
-6 & 115,202 & 0.011\% & 106,555 & 0.010\% \\
-7 & 2,297 & 0.000\% & 542 & 0.000\% \\
-8 & 3 & 0.000\% & 0 & 0.000\% \\
\end{array}$$
over all \$1,056,964,609\$ normal (non-subnormal) IEEE 754 single-precision (float aka Binary32) values between 0 and 1.

Instead of float, the same approach can be used with fixed-point types as well.  The De Casteljau evaluation is then best rewritten in terms of a linear interpolation helper function, as that can almost halve the number of multiplications needed.
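For example, a Q15 sketch could look like the following (an illustration only: the Q15 format, the helper names, and the assumption that the coefficient table has been converted to Q15 are all mine; untested).  Six lerp() calls means six multiplications, versus the ten multiplications in the float version above:
Code: [Select]
#include <stdint.h>

// Linear interpolation a + t*(b - a), with t in Q15 (0..32768).
// Assumes |b - a| < 65536 so the product fits in 32 bits; within
// one sector the endpoint values are close, so this holds easily.
static inline int32_t lerp_q15(int32_t a, int32_t b, int32_t t) {
    return a + ((t * (b - a)) >> 15);
}

// One sector in Q15; b0..b3 are the Bezier points, b1 = v0 + t0 and
// b2 = v1 - t1 exactly as in the float code above.
int32_t usin_sector_q15(int32_t b0, int32_t b1, int32_t b2, int32_t b3,
                        int32_t phase) {
    int32_t z0 = lerp_q15(b0, b1, phase);
    int32_t z1 = lerp_q15(b1, b2, phase);
    int32_t z2 = lerp_q15(b2, b3, phase);
    int32_t y0 = lerp_q15(z0, z1, phase);
    int32_t y1 = lerp_q15(z1, z2, phase);
    return lerp_q15(y0, y1, phase);
}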
« Last Edit: June 16, 2024, 07:45:15 pm by Nominal Animal »
 
The following users thanked this post: oPossum, mikerj, newbrain, RoGeorge, iMo, harerod, Picuino, wek

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Code: [Select]
/*
  C program for double floating point sin and cos.
  Calls modf.
  There are no error exits.
  Coefficients are #3370 from Hart & Cheney (18.80D).
*/
#include "cmath.h"

static const double p0 =  0.1357884097877375669092680e8;
static const double p1 = -0.4942908100902844161158627e7;
static const double p2 =  0.4401030535375266501944918e6;
static const double p3 = -0.1384727249982452873054457e5;
static const double p4 =  0.1459688406665768722226959e3;
static const double q0 =  0.8644558652922534429915149e7;
static const double q1 =  0.4081792252343299749395779e6;
static const double q2 =  0.9463096101538208180571257e4;
static const double q3 =  0.1326534908786136358911494e3;


static double sinus(double arg, int quad) {
   double x, x2, temp;
   int integ;

   /* Negative argument */
   if (arg < 0) {
      arg = -arg;
      quad = quad + 2;
   }

   /* Argument > 90 degrees */
   x = arg * TWO_PI;
   integ = (int) floor(x);
   x = x - integ;

   quad = (quad + integ) & 0x03;
   if (quad & 01)
      x = 1.0 - x;
   if (quad > 1)   
      x = - x;

   /* Compute sin of x */
   x2 = x * x;
   temp = ((((p4 * x2 + p3) * x2 + p2) * x2 + p1) * x2 + p0) * x;
   temp = temp / ((((x2 + q3) * x2 + q2) * x2 + q1) * x2 + q0);
   return(temp);
}

double cos(double arg) {
   if (arg<0)  arg = -arg;
   return(sinus(arg, 1));
}

double sin(double arg) {
   return(sinus(arg, 0));
}


EDIT:
9 multiplications and one division to get the sine or cosine of argument, in double precision.
« Last Edit: June 16, 2024, 08:04:06 pm by Picuino »
 
The following users thanked this post: oPossum, LaserSteve

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
9 multiplications and one division to get the sine or cosine of argument, in double precision.
Needs double precision support, and the argument is in radians, not in cycles, so that basically suggests replacing apples with oranges.
 

Offline KE5FX

  • Super Contributor
  • ***
  • Posts: 1991
  • Country: us
    • KE5FX.COM
Frequently when you're calculating both sin and cos, you're rotating something.  If the phase advances by the same delta each time, there's a really cool recurrence relation that gets you out of doing any sin/cos calls at all.

From Numerical Recipes 2nd ed:



This trick will be obvious to most Math Guys, I'm sure, but I was pretty glad to stumble across it when I needed it.
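For those without the book at hand, the recurrence is (if I recall the NR formulation correctly): with \$\alpha = 2 \sin^2(\delta/2)\$ and \$\beta = \sin \delta\$ computed once,
$$\cos(\theta + \delta) = \cos\theta - (\alpha \cos\theta + \beta \sin\theta), \qquad \sin(\theta + \delta) = \sin\theta - (\alpha \sin\theta - \beta \cos\theta).$$
A minimal sketch (names are illustrative):
Code: [Select]
#include <math.h>

// Fills c_out[k] = cos(k*d), s_out[k] = sin(k*d) without further
// sin/cos calls.  The subtractive form keeps rounding error from
// growing as fast as naively summing the angle would.
void rotate_sequence(double d, int n, double *c_out, double *s_out) {
    const double alpha = 2.0 * sin(0.5 * d) * sin(0.5 * d);
    const double beta  = sin(d);
    double c = 1.0, s = 0.0;  // cos(0), sin(0)
    for (int k = 0; k < n; k++) {
        c_out[k] = c;
        s_out[k] = s;
        const double cn = c - (alpha * c + beta * s);
        const double sn = s - (alpha * s - beta * c);
        c = cn;
        s = sn;
    }
}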
 
The following users thanked this post: newbrain, RoGeorge, RAPo

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Needs double precision support, and the argument is in radians, not in cycles, so that basically suggests replacing apples with oranges.

Ok, another orange:
 - Single precision
 - Only 7 multiplications, no division
 - Only 7 coefficients (single-precision float values)
 - Input argument range from -1 to +1

Code: [Select]
import numpy as np
import matplotlib.pyplot as plt


def cos_poly(x):
   p0 = +1.61608342e-03
   p1 = -2.54255600e-02
   p2 = +2.35104787e-01
   p3 = -1.33519736e+00
   p4 = +4.05870398e+00
   p5 = -4.93480191e+00
   p6 = +1.00000000e+00
   xx = x * x
   return ((((((p0 * xx + p1)*xx + p2)*xx + p3)*xx + p4)*xx + p5)*xx + p6)


def main():
   xx = np.linspace(-1, 1, 500)
   yy = cos_true(xx) - cos_poly(xx)

   fig, ax = plt.subplots()
   ax.plot(xx, yy)
   ax.grid(True)
   plt.show()


def cos_true(x):
   return np.cos(x * np.pi)

   
main()

Attached: Error (+- 2 * 1e-8).
« Last Edit: June 16, 2024, 09:53:21 pm by Picuino »
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 185
  • Country: us
Neat, here are some with better tradeoffs: https://gist.github.com/publik-void/067f7f2fef32dbe5c27d6e215f824c91

The relative error is 10^4 times better than Picuino's for cos, given the same order (12).

Estrin's scheme is better than Horner's on out-of-order processors, and these should be coded using FMA intrinsics.
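For a concrete sketch with the cubic-in-x² polynomials used in this thread (coefficient names are placeholders): Horner is a single serial FMA chain, while Estrin's two inner terms and the squaring are independent and can execute in parallel:
Code: [Select]
#include <math.h>

// p(w) = c0 + c1*w + c2*w^2 + c3*w^3, where w = x*x.

static inline float poly3_horner(float w, float c0, float c1,
                                 float c2, float c3) {
    // Three FMAs, each depending on the previous one.
    return fmaf(fmaf(fmaf(c3, w, c2), w, c1), w, c0);
}

static inline float poly3_estrin(float w, float c0, float c1,
                                 float c2, float c3) {
    const float e0 = fmaf(c1, w, c0);  // c0 + c1*w: independent of e1, w2
    const float e1 = fmaf(c3, w, c2);  // c2 + c3*w
    const float w2 = w * w;
    return fmaf(e1, w2, e0);           // critical path: only two FMAs
}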


Frequently when you're calculating both sin and cos, you're rotating something.  If the phase advances by the same delta each time, there's a really cool recurrence relation that gets you out of doing any sin/cos calls at all.

From Numerical Recipes 2nd ed:

This trick will be obvious to most Math Guys, I'm sure, but I was pretty glad to stumble across it when I needed it.

I want to say this is basically software CORDIC, since you're applying the same Lie-algebraic infinitesimal rotations, though CORDIC uses binary-arctan angles to allow strength-reducing the operations.
« Last Edit: June 17, 2024, 03:08:35 am by nimish »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Ok, another orange:
Needs fixing!  ;)

Python uses double precision, not single-precision.  Also, you implemented period 2, not period 1.  (Use np.cos(2*x*np.pi).)

Writing the fixed version in C (or some other language that supports single-precision without resorting to double precision), you'd see that it diverges quite soon, with significant deviations near 0.25, and it utterly fails above ±0.75 or so.

The idea is that we want to be able to use usinf(x - (int)x) or ucosf(x - (int)x) or similar expressions for any x within -16777216..16777216 or so, although the larger you go, the fewer phase/angle bits there are.  You have zero angle bits left at ±16777216, one at ±8388608, but the point is that you can use say [-4, +4] or whatever fits your needs, and Just Not Worry About It.



Anyway, that one is based on the bog-standard polynomial series expansion for cosine:
$$\operatorname{ucos}(x) = \cos(2 \pi x) = \sum_{k=0}^\infty (-1)^k \frac{(2 \pi)^{2 k}}{(2 k)!} x^{2 k} = \sum_{k=0}^\infty C_k x^{2 k}$$
where
$$\begin{aligned}
C_0 &= 1 \\
C_1 &= -19.73920880217871 \\
C_2 &= 64.93939402266828 \\
C_3 &= -85.45681720669372 \\
C_4 &= 60.24464137187665 \\
C_5 &= -26.42625678337438 \\
C_6 &= 7.903536371318467 \\
C_7 &= -1.714390711088671 \\
C_8 &= 0.282005968455791 \\
C_9 &= -0.03638284114254565 \\
C_k &= (-1)^k \frac{(2 \pi)^{2 k}}{(2 k)!} \\
C_{k+1} &= - C_k \frac{4 \pi^2}{(2 k + 1)(2 k + 2)} \\
\end{aligned}$$
and so on.
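If you want to regenerate these coefficients on a host machine, a minimal sketch using the recurrence above (double precision on the host assumed):
Code: [Select]
#include <stdio.h>
#include <math.h>

int main(void) {
    const double w = 4.0 * acos(-1.0) * acos(-1.0);  // (2*pi)^2
    double C = 1.0;                                  // C_0 = 1
    for (int k = 0; k <= 9; k++) {
        printf("C_%d = %.16g\n", k, C);
        // C_{k+1} = -C_k * (2*pi)^2 / ((2k+1)(2k+2))
        C = -C * w / ((2.0 * k + 1.0) * (2.0 * k + 2.0));
    }
    return 0;
}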

Nothing wrong in using that, though; it is a well-known formulation, and useful expansion around zero.

However, it is nowhere near as good as you asserted.  You simply sampled it on a regular grid, and assumed that covers all the errors.  Not so: exhaustive tests are needed, because rounding errors and the like can produce large errors within a narrow range that regular sampling misses.

This is why my tests are exhaustive: I literally test every single non-subnormal float between 0 and 1.  Subnormal floats are approximately in the range \$-2^{-126} \dots +2^{-126}\$, and so close to zero we can safely consider them zeroes.  (Many embedded C math libraries do exactly that, with a note somewhere saying subnormal numbers are not supported or are rounded to zero.)  They are definitely smaller than any machine epsilon I tend to care about.

(And yes, I sometimes use nextafterf() to tweak one coefficient by one ULP at a time, to see if there is a nearby set of coefficients that yields smaller errors, simply because it changes the set of rounding coincidents that happen.)

If you evaluate the corrected first-seven-terms approximation at x = 0.23118135, you'll find that it is 14 ULPs low (relative error 14/16777216 ≃ 0.0000008).  It's even more off at x = 0.24430753.

The corresponding sine series is
$$\operatorname{usin}(x) = \sin(2 \pi x) = \sum_{k=0}^\infty (-1)^k \frac{(2 \pi)^{2 k + 1}}{(2 k + 1)!} x^{2 k + 1} = \sum_{k=0}^\infty S_k x^{2 k + 1}$$
where
$$\begin{aligned}
S_0 &= 6.283185307179586 = 2 \pi \\
S_1 &= -41.34170224039975 \\
S_2 &= 81.60524927607504 \\
S_3 &= -76.70585975306136 \\
S_4 &= 42.05869394489765 \\
S_5 &= -15.09464257682298 \\
S_6 &= 3.81995258484828 \\
S_7 &= -0.7181223017785002 \\
S_8 &= 0.1042291622081397 \\
S_9 &= -0.01203158594212062 \\
S_k &= (-1)^k \frac{(2 \pi)^{2 k + 1}}{(2 k + 1)!} \\
S_{k+1} &= -S_k \frac{4 \pi^2}{(2 k + 2)(2 k + 3)} \\
\end{aligned}$$
and it fares slightly better, but not much.

I really should rewrite my crude bench program and post it, I guess.  I really should vectorize it for AVX/SSE, as that cuts the full check duration to one quarter.  And clean up the output to e.g. a nicely formatted HTML table.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Frequently when you're calculating both sin and cos, you're rotating something.  If the phase advances by the same delta each time, there's a really cool recurrence relation that gets you out of doing any sin/cos calls at all.
For me, the typical use case is constructing an orientation quaternion/bivector (the math is the same whichever you pick) via
$$\mathbf{q} = \Biggl( \cos\left(\frac{\varphi}{2}\right), ~ ~ \frac{x}{n} \sin\left(\frac{\varphi}{2}\right), ~ ~ \frac{y}{n} \sin\left(\frac{\varphi}{2}\right), ~ ~ \frac{z}{n} \sin\left(\frac{\varphi}{2}\right) \Biggr), \quad n = \sqrt{x^2 + y^2 + z^2}$$
for rotating 3D data by angle \$\varphi\$ around axis \$(x, y, z)\$.  Unlike rotation matrices, one can safely chain orientation quaternions/bivectors, because there is no loss of orthogonality, or harm if rounding errors compound a bit.
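In code, the construction is a couple of lines using the unit-period functions from the opening post (a sketch: the angle is assumed to be given in turns rather than radians, the (w, x, y, z) layout and names are mine, and the axis must be nonzero):
Code: [Select]
#include <math.h>

float usinf(float x);  // unit-period sine from the opening post
float ucosf(float x);  // unit-period cosine from the opening post

typedef struct { float w, x, y, z; } quat;

// Orientation quaternion for rotation by 'turns' full revolutions
// around axis (x, y, z); the axis need not be pre-normalized.
quat axis_angle_quat(float turns, float x, float y, float z) {
    const float n = sqrtf(x*x + y*y + z*z);
    const float c = ucosf(0.5f * turns);      // cos(phi/2), phi = 2*pi*turns
    const float s = usinf(0.5f * turns) / n;  // sin(phi/2) / |axis|
    return (quat){ .w = c, .x = x * s, .y = y * s, .z = z * s };
}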

Aside from testing and verification of code and algorithms – which I like to do thoroughly – I don't even recall when I needed a sequence of sine or cosine values, instead of semi-random specific values.

(I utterly despise Euler and Tait-Bryan angles.  Not only are they vague – there are literally dozens of them, and nobody bothers to specify the exact one they are referring to – but they all suffer from gimbal lock.  I'll do orientation quaternions/versors/bivectors any day instead, even if Hamilton product is a bit cumbersome and conversion to and from rotation matrices is a bit fiddly.)
« Last Edit: June 17, 2024, 04:45:57 am by Nominal Animal »
 

Online peter-h

  • Super Contributor
  • ***
  • Posts: 4003
  • Country: gb
  • Doing electronics since the 1960s...
Quote
verified for all 1,056,964,609 finite normal values between 0 and 1, i.e. one full period

Impressive :)
Z80 Z180 Z280 Z8 S8 8031 8051 H8/300 H8/500 80x86 90S1200 32F417
 
The following users thanked this post: RAPo

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
I have a series that approximates the sine function with only 3 coefficients and 4 multiplications, with an error of 6e-7. It does not take full advantage of single-precision arithmetic, but it is very simple and fast.
I will make a program to test the errors in the series and post it.


EDIT:
For now I will post the python program to calculate the coefficients.
Code: [Select]
import numpy as np
import matplotlib.pyplot as plt


def main():
    nodes = 7
    x_nodes = cheby_nodes(-0.125, 0.125, nodes)
    y_nodes = sin(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(0, nodes, 2):
        pol[i] = 0
    for i in range(1, nodes, 2):
        print(f"   p{i} = {pol[i]:+11.8e}")

    xx = np.linspace(-0.125, 0.125, 500)
    yy = np.polyval(pol, xx) - sin(xx)
   
    plt.plot(xx, yy)
    plt.ylabel('sin(x)')
    plt.show()


def sin(x):
    return np.sin(x * 2 * np.pi)


def cheby_nodes(a, b, nodes):
    i = np.array(range(nodes))
    x = np.cos((2 * i + 1) * np.pi / (2 * nodes))
    return 0.5 * (b - a) * x + 0.5 * (b + a)


main()

I am going to calculate the sine and cosine from the trigonometric identities that allow computing them from angles in the 0 to 45° range only.


Code: [Select]
   p1 = +7.95301472e+01
   p3 = -4.13255427e+01
   p5 = +6.28315378e+00
« Last Edit: June 17, 2024, 08:47:29 am by Picuino »
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #10 on: June 17, 2024, 08:53:53 am »
Cosine with error of 4e-8:

Code: [Select]
import numpy as np
import matplotlib.pyplot as plt


def main():
    nodes = 7
    x_nodes = cheby_nodes(-0.125, 0.125, nodes)
    y_nodes = cos(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(1, nodes, 2):
        pol[i] = 0
    for i in range(0, nodes, 2):
        print(f"   p{i} = {pol[i]:+11.8e}")

    xx = np.linspace(-0.125, 0.125, 500)
    yy = np.polyval(pol, xx) - cos(xx)
   
    plt.plot(xx, yy)
    plt.ylabel('cos(x)')
    plt.show()


def cos(x):
    return np.cos(x * 2 * np.pi)

def sin(x):
    return np.sin(x * 2 * np.pi)


def cheby_nodes(a, b, nodes):
    i = np.array(range(nodes))
    x = np.cos((2 * i + 1) * np.pi / (2 * nodes))
    return 0.5 * (b - a) * x + 0.5 * (b + a)


main()


Coefficients:
Code: [Select]
   p0 = -8.38235436e+01
   p2 = +6.49266669e+01
   p4 = -1.97391840e+01
   p6 = +1.00000000e+00
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #11 on: June 17, 2024, 10:55:32 am »
C file with implementation of the function sine(x * 2 * pi) between -1 and +1.

Code: [Select]
#include <math.h>
#include <stdio.h>

float sin_pol(float x);
float sin_pol_r(float x);
float cos_pol_r(float x);

int main(void) {
   double x, y1, y2;

   for(x = -1.0; x <= 1.0; x += 1.0/100) {
      y1 = sin_pol(x);
      y2 = sin(x * M_PI * 2);
      printf("x=%+11.8e  sin(x)=%+11.8e  Error=%+11.8e\n", x, y1, y1-y2);
   }
}


float sin_pol(float x) {
    signed char sign = 0;
    if (x < 0) {
        sign = -1;
        x = -x;
    }
    if (x > 0.5) {
        x = x - 0.5;
        if (sign == 0)
            sign = -1;
        else
            sign = 0;
    }
    if (x > 0.25) {
        x = 0.5 - x;
    }
    if (x > 0.125) {
        if (sign == 0)
            return cos_pol_r(0.25 - x);
        else
            return -cos_pol_r(0.25 - x);
    }
    else {
        if (sign == 0)
            return sin_pol_r(x);
        else
            return -sin_pol_r(x);
    }
}


float cos_pol_r(float x) {
   float p0 = -8.38235436e+01;
   float p2 = +6.49266669e+01;
   float p4 = -1.97391840e+01;
   float p6 = +1.00000000e+00;
   float xx = x * x;
   return ((p0 * xx + p2) * xx + p4) * xx + p6;
}


float sin_pol_r(float x) {
   float p1 = +7.95301472e+01;
   float p3 = -4.13255427e+01;
   float p5 = +6.28315378e+00;
   float xx = x * x;
   return ((p1 * xx + p3) * xx + p5) * x;
}



Output (sin values and error):
Code: [Select]
x=-1.00000000e+000  sin(x)=+0.00000000e+000  Error=-2.44921271e-016
x=-9.90000000e-001  sin(x)=+6.27901627e-002  Error=-3.56813253e-007
x=-9.80000000e-001  sin(x)=+1.25332611e-001  Error=-6.22085800e-007
x=-9.70000000e-001  sin(x)=+1.87380587e-001  Error=-7.27921819e-007
x=-9.60000000e-001  sin(x)=+2.48689600e-001  Error=-2.86958976e-007
x=-9.50000000e-001  sin(x)=+3.09016932e-001  Error=-6.20652434e-008
x=-9.40000000e-001  sin(x)=+3.68124780e-001  Error=+2.27642126e-007
x=-9.30000000e-001  sin(x)=+4.25779745e-001  Error=+4.53413709e-007
x=-9.20000000e-001  sin(x)=+4.81754155e-001  Error=+4.80928170e-007
x=-9.10000000e-001  sin(x)=+5.35827018e-001  Error=+2.22972283e-007
x=-9.00000000e-001  sin(x)=+5.87785283e-001  Error=+3.02962440e-008
x=-8.90000000e-001  sin(x)=+6.37423556e-001  Error=-4.33577147e-007
x=-8.80000000e-001  sin(x)=+6.84546932e-001  Error=-1.73519201e-007
x=-8.70000000e-001  sin(x)=+7.28968644e-001  Error=+1.67323476e-008
x=-8.60000000e-001  sin(x)=+7.70513246e-001  Error=+2.78052437e-009
x=-8.50000000e-001  sin(x)=+8.09016927e-001  Error=-6.74160083e-008
x=-8.40000000e-001  sin(x)=+8.44327989e-001  Error=+6.32185192e-008
x=-8.30000000e-001  sin(x)=+8.76306696e-001  Error=+1.55469245e-008
x=-8.20000000e-001  sin(x)=+9.04827045e-001  Error=-7.06649284e-009
x=-8.10000000e-001  sin(x)=+9.29776475e-001  Error=-1.07940462e-008
x=-8.00000000e-001  sin(x)=+9.51056502e-001  Error=-1.44570964e-008
x=-7.90000000e-001  sin(x)=+9.68583142e-001  Error=-1.90036964e-008
x=-7.80000000e-001  sin(x)=+9.82287298e-001  Error=+4.71069131e-008
x=-7.70000000e-001  sin(x)=+9.92114725e-001  Error=+2.33133507e-008
x=-7.60000000e-001  sin(x)=+9.98026735e-001  Error=+6.19351159e-009
x=-7.50000000e-001  sin(x)=+1.00000000e+000  Error=+0.00000000e+000
x=-7.40000000e-001  sin(x)=+9.98026735e-001  Error=+6.19351181e-009
x=-7.30000000e-001  sin(x)=+9.92114725e-001  Error=+2.33133511e-008
x=-7.20000000e-001  sin(x)=+9.82287298e-001  Error=+4.71069137e-008
x=-7.10000000e-001  sin(x)=+9.68583142e-001  Error=-1.90036956e-008
x=-7.00000000e-001  sin(x)=+9.51056502e-001  Error=-1.44570955e-008
x=-6.90000000e-001  sin(x)=+9.29776475e-001  Error=-1.07940451e-008
x=-6.80000000e-001  sin(x)=+9.04827045e-001  Error=-7.06649150e-009
x=-6.70000000e-001  sin(x)=+8.76306696e-001  Error=+1.55469261e-008
x=-6.60000000e-001  sin(x)=+8.44327989e-001  Error=+6.32185209e-008
x=-6.50000000e-001  sin(x)=+8.09016927e-001  Error=-6.74160066e-008
x=-6.40000000e-001  sin(x)=+7.70513246e-001  Error=+2.78052625e-009
x=-6.30000000e-001  sin(x)=+7.28968644e-001  Error=+1.67323496e-008
x=-6.20000000e-001  sin(x)=+6.84546932e-001  Error=-1.73519199e-007
x=-6.10000000e-001  sin(x)=+6.37423556e-001  Error=-4.33577145e-007
x=-6.00000000e-001  sin(x)=+5.87785283e-001  Error=+3.02962467e-008
x=-5.90000000e-001  sin(x)=+5.35827018e-001  Error=+2.22972285e-007
x=-5.80000000e-001  sin(x)=+4.81754155e-001  Error=+4.80928173e-007
x=-5.70000000e-001  sin(x)=+4.25779745e-001  Error=+4.53413712e-007
x=-5.60000000e-001  sin(x)=+3.68124780e-001  Error=+2.27642129e-007
x=-5.50000000e-001  sin(x)=+3.09016932e-001  Error=-6.20652406e-008
x=-5.40000000e-001  sin(x)=+2.48689600e-001  Error=-2.86958973e-007
x=-5.30000000e-001  sin(x)=+1.87380587e-001  Error=-7.27921816e-007
x=-5.20000000e-001  sin(x)=+1.25332611e-001  Error=-6.22085797e-007
x=-5.10000000e-001  sin(x)=+6.27901627e-002  Error=-3.56813251e-007
x=-5.00000000e-001  sin(x)=-0.00000000e+000  Error=+2.78699589e-015
x=-4.90000000e-001  sin(x)=-6.27901629e-002  Error=+3.56667969e-007
x=-4.80000000e-001  sin(x)=-1.25332803e-001  Error=+4.31030204e-007
x=-4.70000000e-001  sin(x)=-1.87380776e-001  Error=+5.38776440e-007
x=-4.60000000e-001  sin(x)=-2.48689413e-001  Error=+4.74094179e-007
x=-4.50000000e-001  sin(x)=-3.09016943e-001  Error=+5.13970452e-008
x=-4.40000000e-001  sin(x)=-3.68124783e-001  Error=-2.30354412e-007
x=-4.30000000e-001  sin(x)=-4.25779730e-001  Error=-4.38516483e-007
x=-4.20000000e-001  sin(x)=-4.81754333e-001  Error=-6.58679120e-007
x=-4.10000000e-001  sin(x)=-5.35827160e-001  Error=-3.64902592e-007
x=-4.00000000e-001  sin(x)=-5.87785125e-001  Error=+1.27513729e-007
x=-3.90000000e-001  sin(x)=-6.37423575e-001  Error=+4.14824223e-007
x=-3.80000000e-001  sin(x)=-6.84546947e-001  Error=+1.58449443e-007
x=-3.70000000e-001  sin(x)=-7.28968620e-001  Error=+7.12112092e-009
x=-3.60000000e-001  sin(x)=-7.70513237e-001  Error=+6.25311691e-009
x=-3.50000000e-001  sin(x)=-8.09017003e-001  Error=-8.20760049e-009
x=-3.40000000e-001  sin(x)=-8.44327927e-001  Error=-1.13372511e-009
x=-3.30000000e-001  sin(x)=-8.76306593e-001  Error=+8.66257441e-008
x=-3.20000000e-001  sin(x)=-9.04827058e-001  Error=-5.84925586e-009
x=-3.10000000e-001  sin(x)=-9.29776490e-001  Error=-3.84639676e-009
x=-3.00000000e-001  sin(x)=-9.51056480e-001  Error=+3.58874400e-008
x=-2.90000000e-001  sin(x)=-9.68583167e-001  Error=-5.47064161e-009
x=-2.80000000e-001  sin(x)=-9.82287288e-001  Error=-3.69834078e-008
x=-2.70000000e-001  sin(x)=-9.92114723e-001  Error=-2.14142509e-008
x=-2.60000000e-001  sin(x)=-9.98026729e-001  Error=-2.01794137e-010
x=-2.50000000e-001  sin(x)=-1.00000000e+000  Error=+0.00000000e+000
x=-2.40000000e-001  sin(x)=-9.98026729e-001  Error=-2.01794581e-010
x=-2.30000000e-001  sin(x)=-9.92114723e-001  Error=-2.14142519e-008
x=-2.20000000e-001  sin(x)=-9.82287288e-001  Error=-3.69834093e-008
x=-2.10000000e-001  sin(x)=-9.68583167e-001  Error=-5.47064360e-009
x=-2.00000000e-001  sin(x)=-9.51056540e-001  Error=-2.37172074e-008
x=-1.90000000e-001  sin(x)=-9.29776490e-001  Error=-3.84639998e-009
x=-1.80000000e-001  sin(x)=-9.04827058e-001  Error=-5.84925952e-009
x=-1.70000000e-001  sin(x)=-8.76306653e-001  Error=+2.70210951e-008
x=-1.60000000e-001  sin(x)=-8.44327927e-001  Error=-1.13372967e-009
x=-1.50000000e-001  sin(x)=-8.09017003e-001  Error=-8.20760548e-009
x=-1.40000000e-001  sin(x)=-7.70513296e-001  Error=-5.33515332e-008
x=-1.30000000e-001  sin(x)=-7.28968620e-001  Error=+7.12111525e-009
x=-1.20000000e-001  sin(x)=-6.84546888e-001  Error=+2.18054082e-007
x=-1.10000000e-001  sin(x)=-6.37423456e-001  Error=+5.34033507e-007
x=-1.00000000e-001  sin(x)=-5.87785184e-001  Error=+6.79090769e-008
x=-9.00000000e-002  sin(x)=-5.35827160e-001  Error=-3.64902599e-007
x=-8.00000000e-002  sin(x)=-4.81754243e-001  Error=-5.69272160e-007
x=-7.00000000e-002  sin(x)=-4.25779790e-001  Error=-4.98121135e-007
x=-6.00000000e-002  sin(x)=-3.68124753e-001  Error=-2.00552097e-007
x=-5.00000000e-002  sin(x)=-3.09016854e-001  Error=+1.40804005e-007
x=-4.00000000e-002  sin(x)=-2.48689458e-001  Error=+4.29390688e-007
x=-3.00000000e-002  sin(x)=-1.87380761e-001  Error=+5.53677593e-007
x=-2.00000000e-002  sin(x)=-1.25332728e-001  Error=+5.05536002e-007
x=-1.00000000e-002  sin(x)=-6.27902225e-002  Error=+2.97063317e-007
x=+7.52869989e-016  sin(x)=+4.73039809e-015  Error=-2.35621255e-020
x=+1.00000000e-002  sin(x)=+6.27902211e-002  Error=-2.98413454e-007
x=+2.00000000e-002  sin(x)=+1.25332728e-001  Error=-5.05985060e-007
x=+3.00000000e-002  sin(x)=+1.87380759e-001  Error=-5.55495390e-007
x=+4.00000000e-002  sin(x)=+2.48689464e-001  Error=-4.23056269e-007
x=+5.00000000e-002  sin(x)=+3.09016865e-001  Error=-1.29230045e-007
x=+6.00000000e-002  sin(x)=+3.68124758e-001  Error=+2.05355240e-007
x=+7.00000000e-002  sin(x)=+4.25779788e-001  Error=+4.96089063e-007
x=+8.00000000e-002  sin(x)=+4.81754237e-001  Error=+5.63154022e-007
x=+9.00000000e-002  sin(x)=+5.35827179e-001  Error=+3.83625059e-007
x=+1.00000000e-001  sin(x)=+5.87785166e-001  Error=-8.62815578e-008
x=+1.10000000e-001  sin(x)=+6.37423482e-001  Error=-5.07816449e-007
x=+1.20000000e-001  sin(x)=+6.84546894e-001  Error=-2.11658960e-007
x=+1.30000000e-001  sin(x)=+7.28968644e-001  Error=+1.67323451e-008
x=+1.40000000e-001  sin(x)=+7.70513296e-001  Error=+5.36440776e-008
x=+1.50000000e-001  sin(x)=+8.09017030e-001  Error=+3.57696711e-008
x=+1.60000000e-001  sin(x)=+8.44327902e-001  Error=-2.38781680e-008
x=+1.70000000e-001  sin(x)=+8.76306652e-001  Error=-2.85009620e-008
x=+1.80000000e-001  sin(x)=+9.04827045e-001  Error=-7.06649450e-009
x=+1.90000000e-001  sin(x)=+9.29776475e-001  Error=-1.07940474e-008
x=+2.00000000e-001  sin(x)=+9.51056529e-001  Error=+1.26669111e-008
x=+2.10000000e-001  sin(x)=+9.68583165e-001  Error=+3.73461528e-009
x=+2.20000000e-001  sin(x)=+9.82287264e-001  Error=+1.28414841e-008
x=+2.30000000e-001  sin(x)=+9.92114713e-001  Error=+1.12808685e-008
x=+2.40000000e-001  sin(x)=+9.98026729e-001  Error=+3.08904458e-010
x=+2.50000000e-001  sin(x)=+1.00000000e+000  Error=+0.00000000e+000
x=+2.60000000e-001  sin(x)=+9.98026735e-001  Error=+6.19351204e-009
x=+2.70000000e-001  sin(x)=+9.92114701e-001  Error=-1.78636883e-010
x=+2.80000000e-001  sin(x)=+9.82287264e-001  Error=+1.28414860e-008
x=+2.90000000e-001  sin(x)=+9.68583188e-001  Error=+2.64729305e-008
x=+3.00000000e-001  sin(x)=+9.51056502e-001  Error=-1.44570944e-008
x=+3.10000000e-001  sin(x)=+9.29776475e-001  Error=-1.07940437e-008
x=+3.20000000e-001  sin(x)=+9.04827045e-001  Error=-7.06649017e-009
x=+3.30000000e-001  sin(x)=+8.76306607e-001  Error=-7.25488413e-008
x=+3.40000000e-001  sin(x)=+8.44327902e-001  Error=-2.38781624e-008
x=+3.50000000e-001  sin(x)=+8.09017030e-001  Error=+3.57696771e-008
x=+3.60000000e-001  sin(x)=+7.70513246e-001  Error=+2.78052836e-009
x=+3.70000000e-001  sin(x)=+7.28968644e-001  Error=+1.67323522e-008
x=+3.80000000e-001  sin(x)=+6.84546932e-001  Error=-1.73519196e-007
x=+3.90000000e-001  sin(x)=+6.37423556e-001  Error=-4.33577142e-007
x=+4.00000000e-001  sin(x)=+5.87785130e-001  Error=-1.22673747e-007
x=+4.10000000e-001  sin(x)=+5.35827179e-001  Error=+3.83625068e-007
x=+4.20000000e-001  sin(x)=+4.81754319e-001  Error=+6.45379884e-007
x=+4.30000000e-001  sin(x)=+4.25779745e-001  Error=+4.53413715e-007
x=+4.40000000e-001  sin(x)=+3.68124780e-001  Error=+2.27642132e-007
x=+4.50000000e-001  sin(x)=+3.09016932e-001  Error=-6.20652374e-008
x=+4.60000000e-001  sin(x)=+2.48689419e-001  Error=-4.68422023e-007
x=+4.70000000e-001  sin(x)=+1.87380771e-001  Error=-5.43933164e-007
x=+4.80000000e-001  sin(x)=+1.25332797e-001  Error=-4.36310193e-007
x=+4.90000000e-001  sin(x)=+6.27901627e-002  Error=-3.56813247e-007
x=+5.00000000e-001  sin(x)=+0.00000000e+000  Error=+6.09478830e-015
x=+5.10000000e-001  sin(x)=-6.27901629e-002  Error=+3.56667972e-007
x=+5.20000000e-001  sin(x)=-1.25332609e-001  Error=+6.24745303e-007
x=+5.30000000e-001  sin(x)=-1.87380582e-001  Error=+7.32491538e-007
x=+5.40000000e-001  sin(x)=-2.48689607e-001  Error=+2.80379087e-007
x=+5.50000000e-001  sin(x)=-3.09016943e-001  Error=+5.13970483e-008
x=+5.60000000e-001  sin(x)=-3.68124783e-001  Error=-2.30354409e-007
x=+5.70000000e-001  sin(x)=-4.25779730e-001  Error=-4.38516480e-007
x=+5.80000000e-001  sin(x)=-4.81754154e-001  Error=-4.79865182e-007
x=+5.90000000e-001  sin(x)=-5.35827041e-001  Error=-2.45693300e-007
x=+6.00000000e-001  sin(x)=-5.87785304e-001  Error=-5.13002035e-008
x=+6.10000000e-001  sin(x)=-6.37423575e-001  Error=+4.14824226e-007
x=+6.20000000e-001  sin(x)=-6.84546947e-001  Error=+1.58449445e-007
x=+6.30000000e-001  sin(x)=-7.28968620e-001  Error=+7.12112325e-009
x=+6.40000000e-001  sin(x)=-7.70513237e-001  Error=+6.25311936e-009
x=+6.50000000e-001  sin(x)=-8.09016943e-001  Error=+5.13970463e-008
x=+6.60000000e-001  sin(x)=-8.44327986e-001  Error=-6.07383683e-008
x=+6.70000000e-001  sin(x)=-8.76306713e-001  Error=-3.25835439e-008
x=+6.80000000e-001  sin(x)=-9.04827058e-001  Error=-5.84925475e-009
x=+6.90000000e-001  sin(x)=-9.29776490e-001  Error=-3.84639554e-009
x=+7.00000000e-001  sin(x)=-9.51056480e-001  Error=+3.58874409e-008
x=+7.10000000e-001  sin(x)=-9.68583167e-001  Error=-5.47064072e-009
x=+7.20000000e-001  sin(x)=-9.82287288e-001  Error=-3.69834071e-008
x=+7.30000000e-001  sin(x)=-9.92114723e-001  Error=-2.14142505e-008
x=+7.40000000e-001  sin(x)=-9.98026729e-001  Error=-2.01793915e-010
x=+7.50000000e-001  sin(x)=-1.00000000e+000  Error=+0.00000000e+000
x=+7.60000000e-001  sin(x)=-9.98026729e-001  Error=-2.01794803e-010
x=+7.70000000e-001  sin(x)=-9.92114723e-001  Error=-2.14142524e-008
x=+7.80000000e-001  sin(x)=-9.82287288e-001  Error=-3.69834099e-008
x=+7.90000000e-001  sin(x)=-9.68583167e-001  Error=-5.47064438e-009
x=+8.00000000e-001  sin(x)=-9.51056480e-001  Error=+3.58874364e-008
x=+8.10000000e-001  sin(x)=-9.29776490e-001  Error=-3.84640098e-009
x=+8.20000000e-001  sin(x)=-9.04827058e-001  Error=-5.84926096e-009
x=+8.30000000e-001  sin(x)=-8.76306713e-001  Error=-3.25835510e-008
x=+8.40000000e-001  sin(x)=-8.44327986e-001  Error=-6.07383762e-008
x=+8.50000000e-001  sin(x)=-8.09016943e-001  Error=+5.13970376e-008
x=+8.60000000e-001  sin(x)=-7.70513237e-001  Error=+6.25310936e-009
x=+8.70000000e-001  sin(x)=-7.28968620e-001  Error=+7.12111314e-009
x=+8.80000000e-001  sin(x)=-6.84546947e-001  Error=+1.58449434e-007
x=+8.90000000e-001  sin(x)=-6.37423575e-001  Error=+4.14824214e-007
x=+9.00000000e-001  sin(x)=-5.87785304e-001  Error=-5.13002157e-008
x=+9.10000000e-001  sin(x)=-5.35827041e-001  Error=-2.45693313e-007
x=+9.20000000e-001  sin(x)=-4.81754154e-001  Error=-4.79865196e-007
x=+9.30000000e-001  sin(x)=-4.25779730e-001  Error=-4.38516493e-007
x=+9.40000000e-001  sin(x)=-3.68124783e-001  Error=-2.30354423e-007
x=+9.50000000e-001  sin(x)=-3.09016943e-001  Error=+5.13970339e-008
x=+9.60000000e-001  sin(x)=-2.48689607e-001  Error=+2.80379073e-007
x=+9.70000000e-001  sin(x)=-1.87380582e-001  Error=+7.32491523e-007
x=+9.80000000e-001  sin(x)=-1.25332609e-001  Error=+6.24745288e-007
x=+9.90000000e-001  sin(x)=-6.27901629e-002  Error=+3.56667957e-007

« Last Edit: June 17, 2024, 11:00:27 am by Picuino »
 

Offline mark03

  • Frequent Contributor
  • **
  • Posts: 730
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #12 on: June 17, 2024, 10:33:42 pm »
I don't even recall when I needed a sequence of sine or cosine values, instead of semi-random specific values.
Everyone's different.  Most of my work is signal-processing centric, so I most often deal with sequences of sin/cos, and "semi-random specific values" are the weird outlier.  Of course, there is a large body of literature on this... we call it Direct Digital Synthesis (DDS).  Performance is evaluated a bit differently, though.  Instead of absolute errors in the time domain we usually want to know the amplitude of the spurs relative to the intended sinusoid, for a given computational complexity.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #13 on: June 18, 2024, 07:32:39 am »
Quote
verified for all 1,056,964,609 finite normal values between 0 and 1, i.e. one full period
Impressive :)
No, the right word is useful.  It only takes a minute on my ye olde Intel Core i5 7200U laptop, even without vectorization.  (And if we count 0 and 1 both, then it is 1,056,964,610 tests.)

IEEE 754 Binary32 aka float is what microcontrollers with hardware support for floats implement.  float has
  • Two zeroes: -0.0 and +0.0.  These compare equal.
  • 16,777,214 subnormal values, 8,388,607 positive and 8,388,607 negative.  Their magnitude is less than 2^-126 ≃ 1.1754944e-38, so they are all essentially zero.  Many math libraries do not support subnormals, and instead treat them as zero.
  • Two infinities: -INFINITY and +INFINITY.
  • 16,777,214 NaNs, 8,388,607 positive and 8,388,607 negative.  These compare unequal to everything, even themselves.  If they are used in an arithmetic operation, the result will also be NaN.
  • 4,261,412,864 normal values, 2,130,706,432 positive and 2,130,706,432 negative.  Zero is not counted as a normal value.
    1,056,964,608 of these are less than 1 in magnitude, and 1,073,741,823 are greater than 1 in magnitude.
Because the sign is a separate bit in float (that you can access with signbit(x) in C), you only need to test positive zeros and normals.  Thus, there are only 2,130,706,433 unique float values to test.  Or, if you are only interested in 0.0f <= x <= 1.0f, as is the case here, 1,056,964,610 unique values.

On architectures with IEEE 754 Binary32 support with float byte order the same as uint32_t byte order, the storage representation of positive zero is 0x00000000 (and negative zero is 0x80000000), and the storage representation of positive normals ranges from 0x00800000 to 0x7F7FFFFF.  +1.0f is represented as 0x3f800000.

If you construct a function that checks your approximation for some float x, and records any deviation from the correct value, you can use a very simple loop:
    for (float x = 0.0f; isfinite(x); x = nexttowardf(x, 1e64)) check(x);
or use the storage representation:
    union {
        uint32_t  u32;
        float   f;
    } x;
    check(0.0f);
    for (x.u32 = 0x00800000; x.u32 < 0x7F800000; x.u32++) check(x.f);

If you are only interested in finite normals up to 1.0f, then use
    for (float x = 0.0f; x <= 1.0f; x = nextafterf(x, +HUGE_VALF)) check(x);
or use the storage representation:
    union {
        uint32_t  u32;
        float   f;
    } x;
    check(0.0f);
    for (x.u32 = 0x00800000; x.u32 <= 0x3F800000; x.u32++) check(x.f);

While the above check the values in increasing order, the order obviously doesn't matter.

If you are interested in finding out whether a float division by a constant can be replaced by a float multiplication with the reciprocal constant, you can use the above to check.  If you need it to yield the exact same answer for an integer divisor, you can use the Markstein approach to turn the division into one multiplication and two fused multiply-adds (fmaf); my pseudocode there tells you how often the given integer divisor will yield a value different from the division, across all normal positive float dividends.

To leverage vector extensions using GCC (or Clang), you only need to declare a suitable type, say
    typedef  float  float4  __attribute__((vector_size (4 * sizeof (float))));
and basic arithmetic operators – +, -, *, /, negation, ^ (XOR), | (OR), & (AND), ~ (binary NOT), % (modulus or remainder) – will do the operation element-wise.  You can also access such a vector as an array.  If you use a scalar for the other operand for +, -, *, /, ^, |, &, or %, GCC will automatically convert the scalar to a vector with all components the same value.

If you dynamically allocate vector types, you should use aligned_alloc(_Alignof (vectortype), (size_t)count * sizeof (vectortype)), because malloc()/calloc()/realloc() may not enforce sufficient alignment for hardware-supported vector types.  It is annoying, yes, but the compiler developers insist this is according to the C standards.  (The same also affects libquadmath.)

Conversion from a vector type to another vector type newvectortype with the same number of components is done using __builtin_convertvector(vector, newvectortype).  It behaves exactly like casting each member separately to newvectortype.

This works across all architectures.  If the target does not support such a vector type, GCC (and Clang) combine it from smaller vectors, or uses the plain scalar type, and duplicates the operations as many times as necessary.  It does not always generate optimal code then (especially for sequences of operations that need more than half the available vector registers), but it will work.
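A tiny self-contained example of the above (nothing here is specific to my bench code; float4 and int4 are just the typedefs described):
Code: [Select]
#include <stdio.h>
#include <stdint.h>

typedef float    float4 __attribute__((vector_size (4 * sizeof (float))));
typedef int32_t  int4   __attribute__((vector_size (4 * sizeof (int32_t))));

int main(void) {
    float4 x = { 0.25f, 0.5f, 0.75f, 1.0f };
    float4 y = 64.0f * x;  // scalar is broadcast; multiply is element-wise
    int4   s = __builtin_convertvector(y, int4);  // per-component cast
    for (int i = 0; i < 4; i++)
        printf("y[%d] = %g, s[%d] = %d\n", i, (double)y[i], i, (int)s[i]);
    return 0;
}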

Unfortunately, for vector verification of the results, like comparing two vectors component-wise, you do need to use architecture-specific vector extension built-ins.  It is best to check what operations are available on the different architectures you use for verification, and hide the details behind static inline functions.  For each function, start with a version that does the checking component by component, so you can easily verify your vectorized and component-by-component checking functions always produce the same results.  I like to keep a compile-time switch to select between vectorized and non-vectorized checking for that reason.

Vectorized checking of piecewise polynomial functions is complicated and funky, so I suggest you don't bother with it until you have a firm grasp of how to handle single-instruction, multiple-data vectors in general.
« Last Edit: June 18, 2024, 08:03:55 am by Nominal Animal »
 
The following users thanked this post: glenenglish

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #14 on: June 18, 2024, 07:49:25 am »
I don't even recall when I needed a sequence of sine or cosine values, instead of semi-random specific values.
Everyone's different.
Of course!  That's exactly why I started with "For me, the typical use case is".  As the OP, I thought that describing my typical use cases, and why I started a thread about calculating individual random arguments instead of sequences of arguments, would be informative – and help others decide whether it is relevant to them or their problem domain; that's all.

Most of my work is signal-processing centric, so I most often deal with sequences of sin/cos, and "semi-random specific values" are the weird outlier.  Of course, there is a large body of literature on this... we call it Direct Digital Synthesis (DDS).  Performance is evaluated a bit differently, though.  Instead of absolute errors in the time domain we usually want to know the amplitude of the spurs relative to the intended sinusoid, for a given computational complexity.
Exactly: the problem domain is quite different.

While I have fiddled with trackers (music software), especially the low-down details of resampling, I'm not very familiar with direct digital synthesis, especially not on the algorithmic front.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15192
  • Country: fr
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #15 on: June 18, 2024, 08:33:10 am »
Generating a sine wave incrementally and computing arbitrary values are two use cases that are both useful in practice.  The former is more for implementing some kind of oscillator, and a rotation-based algo allows changing the frequency very easily, which makes it attractive compared to storing a pre-computed period, and it takes a lot less memory.

We had discussed many options too, as I remember, in a previous thread, but I can't find it anymore.
 

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3928
  • Country: ua
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #16 on: June 18, 2024, 11:31:26 am »
9 multiplications and one division to get the sine or cosine of argument, in double precision.

Well, here is my approach: 1 multiplication and 3 additions  :)

Code: [Select]
#define SR 48000        // Sample rate
#define FREQ 1000       // 1000 Hz sine

#define PI  3.1415926535897932384626433832795
#define PW  ((float)FREQ * PI / SR)

static float R = (float)(4 * PW * PW); // for better approximation you can use R = pow(2*sin(PW), 2);
static float V = 0;
static float X = 2000;//4096/3.3 * 0.5;      // init value
static float S = 2048;//1500; // bias offset

uint16_t getSineNextSample(void) {
    V -= X * R;
    X += V;
    return (uint16_t)(X + S);
}
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #17 on: June 18, 2024, 11:37:33 am »
By adjusting the ranges of each polynomial series a little more, a smaller error over the entire x-range of the function is achieved.

 - 4 multiplications per calc
 - Error max: 3e-7

Python program:
Code: [Select]
import numpy as np
import matplotlib.pyplot as plt


def main():
    nodes = 7
    xmax = 0.10546875
    calc_sin(nodes, xmax)
    calc_cos(nodes, 0.25-xmax)
   

def calc_cos(nodes, xmax):
    x_nodes = cheby_nodes(-xmax, +xmax, nodes)
    y_nodes = cos(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(1, nodes, 2):
        pol[i] = 0
    print()
    for i in range(0, nodes, 2):
        print(f"   float p{i} = {pol[i]:+11.8e};")

    xx = np.linspace(-xmax, xmax, 500)
    yy = np.polyval(pol, xx) - cos(xx)
   
    plt.plot(xx, yy)
    plt.xlabel('cos(x)')
    plt.show()


def calc_sin(nodes, xmax):
    x_nodes = cheby_nodes(-xmax, +xmax, nodes)
    y_nodes = sin(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(0, nodes, 2):
        pol[i] = 0
    print()
    for i in range(1, nodes, 2):
        print(f"   float p{i} = {pol[i]:+11.8e};")

    xx = np.linspace(-xmax, xmax, 500)
    yy = np.polyval(pol, xx) - sin(xx)
   
    plt.plot(xx, yy)
    plt.xlabel('sin(x)')
    plt.show()


def cos(x):
    return np.cos(x * 2 * np.pi)

def sin(x):
    return np.sin(x * 2 * np.pi)


def cheby_nodes(a, b, nodes):
    i = np.array(range(nodes))
    x = np.cos((2 * i + 1) * np.pi / (2 * nodes))
    return 0.5 * (b - a) * x + 0.5 * (b + a)


main()


c program:
Code: [Select]
#include <math.h>
#include <stdio.h>

float sin_pol(float x);
float sin_pol_r(float x);
float cos_pol_r(float x);


int main(void) {
   double x, y1, y2;

   printf("x;sin(x);Error\n", x, y1, y1-y2);
   for(x = -1.0; x <= 1.0; x += 1.0/1000) {
      y1 = sin_pol(x);
      y2 = sin(x * M_PI * 2);
      printf("%+11.8e;%+11.8e;%+11.8e\n", x, y1, y1-y2);
   }
}


float sin_pol(float x) {
    signed char sign = 0;
    if (x < 0) {
        sign = -1;
        x = -x;
    }
    if (x > 0.5) {
        x = x - 0.5;
        if (sign == 0)
            sign = -1;
        else
            sign = 0;
    }
    if (x > 0.25) {
        x = 0.5 - x;
    }
    if (x > 0.10546875) {
        if (sign == 0)
            return cos_pol_r(0.25 - x);
        else
            return -cos_pol_r(0.25 - x);
    }
    else {
        if (sign == 0)
            return sin_pol_r(x);
        else
            return -sin_pol_r(x);
    }
}


float cos_pol_r(float x) {
   float p0 = -8.32795518e+01;
   float p2 = +6.49167315e+01;
   float p4 = -1.97391497e+01;
   float p6 = +1.00000000e+00;
   float xx = x * x;
   return ((p0 * xx + p2) * xx + p4) * xx + p6;
}


float sin_pol_r(float x) {
   float p1 = +8.01233966e+01;
   float p3 = -4.13334793e+01;
   float p5 = +6.28317388e+00;
   float xx = x * x;
   return ((p1 * xx + p3) * xx + p5) * x;
}
« Last Edit: June 18, 2024, 11:39:23 am by Picuino »
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 9291
  • Country: gb
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #18 on: June 18, 2024, 11:46:57 am »
9 multiplications and one division to get the sine or cosine of argument, in double precision.

Well, here is my approach: 1 multiplication and 3 additions  :)

Code: [Select]
#define SR 48000        // Sample rate
#define FREQ 1000       // 1000 Hz sine

#define PI  3.1415926535897932384626433832795
#define PW  ((float)FREQ * PI / SR)

static float R = (float)(4 * PW * PW); // for better approximation you can use R = pow(2*sin(PW), 2);
static float V = 0;
static float X = 2000;//4096/3.3 * 0.5;      // init value
static float S = 2048;//1500; // bias offset

uint16_t getSineNextSample(void) {
    V -= X * R;
    X += V;
    return (uint16_t)(X + S);
}
Be real careful with that kind of design, if you need a persistent signal. Rounding errors can easily make the output blow up or fade to zero.
 

Offline AVI-crak

  • Regular Contributor
  • *
  • Posts: 133
  • Country: ru
    • Rtos
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #19 on: June 18, 2024, 02:11:15 pm »
https://github.com/AVI-crak/Rtos_cortex/blob/master/math_s.c
6 multiplications, 6 simple additions, and some logic; the error is 1 least-significant floating-point bit.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #20 on: June 18, 2024, 03:42:12 pm »
Ok, now I'm getting annoyed.

If you want to post your normal sine or cosine approximations, or your code uses doubles, then at least say that they are not what the original post is about.

If you suggest a solution because maximum error at a few thousand regularly sampled intervals is small, then you are only posting a guess.  To know the error bounds, you have to check all valid float input values.

No ifs, no buts.

https://github.com/AVI-crak/Rtos_cortex/blob/master/math_s.c
6 multiplications, 6 simple additions, and some logic, the error is 1 minor floating point bit.
Nope: only normal sin() and cos() implementations there, with period 2·π.  No sines or cosines with period 1.

If you think you can just divide the argument by two pi and get a good unit period sine or cosine, you're wrong: the errors will be much larger.
 
The following users thanked this post: T3sl4co1l

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #21 on: June 18, 2024, 05:47:20 pm »
If you want to post your normal sine or cosine approximations, or your code uses doubles, then at least say that they are not about what the original post is about.

Obviously, any algorithm can be implemented in singles, doubles, fractionals, or whatnot.  Also, algorithms can be scaled as desired horizontally or vertically.  The scaling changes coefficients, but not the principle.

With normal (Taylor) calculation, the error will depend on the number of coefficients you use.  By using more Taylor terms you can decrease the error to any arbitrary value.  If I remember correctly, for single-precision floats you only need 3 Taylor terms to calculate sin() with errors smaller than the least significant digit of the IEEE representation.  This produces a very compact, fast, and efficient calculation which is hard to beat in any respect.

There are certainly many ways to calculate sin(), which may be interesting to discuss, but in practical terms, Taylor is probably the best.
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #22 on: June 18, 2024, 06:19:10 pm »
With Taylor you approximate a function in a way that is most accurate close to x=0.
If what you want is the error to be uniform and minimal over the whole approximated range, it is better to use interpolating polynomials whose nodes are chosen at the Chebyshev zeros.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #23 on: June 18, 2024, 06:36:20 pm »
With Taylor you approximate a function in a way that is most accurate close to x=0.
If what you want is the error to be uniform and minimal over the whole approximated range, it is better to use interpolating polynomials whose nodes are chosen at the Chebyshev zeros.

There are two considerations for numbers close to 0.

Typically, in embedded applications you will be concerned with absolute (rather than relative) errors.  The fact that the representation resolution increases as the number gets smaller is an idiosyncrasy of floating-point numbers.  If you use integers instead, the problem disappears.

Second, as you get closer to zero, the sine function gets closer and closer to sin(x) = x, which you can use for any number below a certain threshold.
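As a worked threshold for float (my arithmetic): the first neglected Taylor term makes the relative error of \$\sin(x) \approx x\$ about \$x^2 / 6\$, so
$$\frac{x^2}{6} \le 2^{-24} \quad \iff \quad |x| \le \sqrt{6 \cdot 2^{-24}} \approx 6 \times 10^{-4},$$
below which returning x itself already matches sin(x) to single-precision accuracy.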
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #24 on: June 18, 2024, 07:31:53 pm »
Six terms of the sin series expansion (as I listed in reply #6) suffice to get within ±3 ULPs for both unit-period sine and cosine,
    usinf(x) = (float)sin(x * 2 * Pi)
    ucosf(x) = (float)cos(x * 2 * Pi)

(The problem with using sinf(x * 6.2831855f) is that it has lots of error near x = ±1, at x = ±0.99999994f, ±0.9999999f, ±0.9999998f, ±0.99999976f, and so on.  6.2831855f is the closest value to two times pi you can get with floats.)
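To put a number on it: the closest float is about \$1.7 \times 10^{-7}\$ above the true value,
$$6.2831855\text{f} \approx 2 \pi + 1.7 \times 10^{-7},$$
so for x just below 1 the computed argument lands that far past where it should be.  Since the correct result \$\sin(2 \pi x) \approx 2 \pi (x - 1)\$ is itself of that order or smaller there, the absolute error stays tiny, but measured in ULPs of the correct result it is huge.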

Example implementation:
Code: [Select]
// x = -0.25f .. 0.25f, returns (float)sin((double)x * 6.283185307179586)
static float usinf_internal(float x) {
    float  xx = x * x;
    return x * (6.2831855f - xx * (41.3417f - xx * (81.60525f - xx * (76.70586f - xx * (42.058693f - xx * 15.094643f)))));
}

// returns (float)sin((double)x * 6.283185307179586) with at most 3 ULPs of error from the expected value
float usinf(float x) {
    // Limit to one period
    x -= (int32_t)x;
    if (x < 0.0f)
        x += 1.0f;

    if (x >= 0.75f)
        return  usinf_internal(x - 1.0f);
    else
    if (x >= 0.25f)
        return -usinf_internal(x - 0.5f);
    else
        return  usinf_internal(x);
}

// returns (float)cos((double)x * 6.283185307179586) with at most 3 ULPs of error from the expected value
float ucosf(float x) {
    // Cosine is an even function
    if (x < 0.0f)
        x = -x;

    // Limit to one full period
    x -= (int32_t)x;

    if (x < 0.5f)
        return -usinf_internal(x - 0.25f);
    else
        return  usinf_internal(x - 0.75f);
}

Here is the error distribution for all non-subnormal float x between 0 and 1, inclusive:
$$\begin{array}{r|r:r|r:r}
~ & \text{usin} & ~ & \text{ucos} & ~ \\
\text{ULPs} & \text{Count} & \text{Fraction} & \text{Count} & \text{Fraction} \\
\hline
-3 &       720 &  0.00\% &     21020 &  0.00\% \\
-2 &    247531 &  0.02\% &   5355330 &  0.51\% \\
-1 &   4785888 &  0.45\% &  56436657 &  5.34\% \\
\hdashline
 0 & 698222628 & 66.06\% & 952910427 & 90.16\% \\
\hdashline
 1 & 353023465 & 33.40\% &  36374217 &  3.44\% \\
 2 &    682383 &  0.06\% &   5835909 &  0.55\% \\
 3 &      1995 &  0.00\% &     31050 &  0.00\% \\
\end{array}$$

This is in all ways superior to the solution in my initial post.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #25 on: June 18, 2024, 07:51:50 pm »
By using more Taylor terms you can decrease the error to any arbitrary value.
No, you cannot.  That only works if you have sufficient precision in your representation of individual terms, compared to that arbitrary error limit.

Essentially, each term needs to have less error (compared to the exact value) than the arbitrary limit; typically less than half the arbitrary limit value.
With floating-point types, this means that you generally cannot get less than ±1 ULP of error, and to get within the standard 0.5 ULP of error, you need to use higher precision for the terms.  If you have N terms similar in magnitude, you can expect up to ±N ULPs of error.

Simply put, the problem is that with floats, 10000000 + 0.5 == 10000000.


 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #26 on: June 18, 2024, 09:00:08 pm »
By using more Taylor terms you can decrease the error to any arbitrary value.
No, you cannot.  That only works if you have sufficient precision in your representation of individual terms, compared to that arbitrary error limit.

Essentially, each term needs to have less error (compared to the exact value) than the arbitrary limit; typically less than half the arbitrary limit value.
With floating point types, this means that you generally cannot get less than ±1 ULP of error, and to get within the standard 0.5 ULP of error, you need to use higher precision for the terms.  If you have N terms similar in magnitude, you can expect on the order of ±N ULPs of error.

It's a matter of technique. You start from the back (from last members) where floating point numbers have excess precision because the values are much smaller than the final result and then you move towards bigger members. This way the rounding error does not accumulate.
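
A minimal sketch of the back-to-front idea (my illustration, not NorthGuy's actual code):
Code: [Select]
// Sum series terms smallest-first, so that the small terms are accumulated
// while the running sum is still small enough to retain their bits.
float sum_small_first(const float term[], const int n) {
    float  sum = 0.0f;
    for (int i = n - 1; i >= 0; i--)  // term[0] is assumed to be the largest
        sum += term[i];
    return sum;
}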
 
The following users thanked this post: SiliconWizard


Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #28 on: June 19, 2024, 06:53:36 am »
It's a matter of technique. You start from the back (from last members) where floating point numbers have excess precision because the values are much smaller than the final result and then you move towards bigger members. This way the rounding error does not accumulate.
Again, no.  That, and similar techniques like Kahan summation, will help you get down to ±1 ULP or so, but no more.

This is because the terms largest in magnitude cannot have too much error for the sum to be within the acceptable error bounds.  Lesser terms cannot compensate for loss of information from the greater terms.

In mathematical terms, we are computing polynomials of form
$$S_n = \sum_{k=0}^n s_k x^k$$
but with floating point types, we actually calculate
$$S_n = \sum_{k=0}^n \left( s_k + \epsilon_{0,k} \right) \left( x^k + \epsilon_{1,k} \right)$$
where \$\epsilon_{0,k}\$ is the error in the coefficient \$s_k\$, and \$\epsilon_{1,k}\$ is error in \$x^k\$.  Expanding the sum, we can separate out the error terms:
$$\begin{aligned}
S_n &= \sum_{k=0}^n s_k x^k \\
~ &+ \sum_{k=0}^n \epsilon_{0,k} x^k \\
~ &+ \sum_{k=0}^n \epsilon_{1,k} s_k \\
~ &+ \sum_{k=0}^n \epsilon_{0,k} \epsilon_{1,k} \\
\end{aligned}$$
The product of the errors \$\epsilon_{0,k} \epsilon_{1,k}\$ is obviously very small, and is generally not a problem at all. 

Because \$s_k\$ are the constant coefficients, \$\epsilon_{0,k}\$ are constant as well; but since \$x^k\$ depends on \$x\$, so does \$\epsilon_{1,k}\$.

That means the error sum \$\sum_{k=0}^n \epsilon_{0,k} x^k\$ depends on \$x\$, as does the error sum \$\sum_{k=0}^n \epsilon_{1,k} s_k\$, but their dependence on \$x\$ is so different, you cannot make them cancel out.

Increasing \$n\$ will not cause the already accrued error to go away.

With fixed point arithmetic, summing \$n\$ terms of \$B\$ bits each will yield a result with \$B + \lceil \log_2 n \rceil\$ bits.  Only one additional bit is needed for proper half-ULP rounding, so generally, you can get the absolute error below any arbitrary limit by using just a few additional bits compared to the absolute error limit.
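
As a concrete illustration (the widths are my choice, not from the thread): summing \$n = 8\$ terms of \$B = 16\$-bit Q1.15 data needs \$16 + \lceil \log_2 8 \rceil = 19\$ bits, so a 32-bit accumulator holds the exact sum, and only the final rounding or saturation step loses information:
Code: [Select]
#include <stdint.h>

// Sketch: exact summation of n = 8 signed Q1.15 terms in a wider accumulator.
int16_t sum8_q15(const int16_t term[8]) {
    int32_t  acc = 0;
    for (int i = 0; i < 8; i++)
        acc += term[i];                     // exact: needs at most 19 bits
    // Saturate once at the end, instead of rounding or wrapping per term.
    if (acc > INT16_MAX) return INT16_MAX;
    if (acc < INT16_MIN) return INT16_MIN;
    return (int16_t)acc;
}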
« Last Edit: June 19, 2024, 08:48:49 am by Nominal Animal »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #29 on: June 19, 2024, 07:18:08 am »
NorthGuy: To clarify, the techniques you mentioned are excellent, and I too do recommend their use.  Thank you for pointing them out!

I am only arguing that "to an arbitrary limit with simply adding more terms" is not correct, when using limited precision representations like floating-point and fixed-point formats.  Because of the limited precision in each term, there is a limit to how many terms can affect the sum; and no term can "repair" the error in other terms.
 

Offline ali_asadzadeh

  • Super Contributor
  • ***
  • Posts: 1929
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #30 on: June 19, 2024, 08:43:17 am »
Nominal Animal, thanks for sharing, would you explain what ±1 ULP means?
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #31 on: June 19, 2024, 08:59:43 am »
...
over all \$1,056,964,609\$ normal (non-subnormal) IEEE 754 single-precision (float aka Binary32) values between 0 and 1.
How do you manage to test all floating values between 0 and 1? (If you post the test program, the rest of us can test our routines with the same pattern).
 

Offline gf

  • Super Contributor
  • ***
  • Posts: 1314
  • Country: de
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #32 on: June 19, 2024, 09:05:19 am »
How do you manage to test all floating values between 0 and 1?

In C/C++, you can use nextafter() or nextafterf() to iterate over representable floating point numbers.
https://en.cppreference.com/w/c/numeric/math/nextafter
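
A minimal sketch of such a sweep (assuming IEEE 754 float; the function name is mine):
Code: [Select]
#include <math.h>

// Visit every representable float in [0, 1] in increasing order,
// subnormals included: just over 1.06e9 iterations.
static void for_each_float_0_to_1(void (*visit)(float)) {
    for (float x = 0.0f; x <= 1.0f; x = nextafterf(x, 2.0f))
        visit(x);
}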
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #33 on: June 19, 2024, 09:43:06 am »
Nominal Animal, thanks for sharing, would you explain what ±1 ULP means?
±1 ULP = ±1 Unit in the Least Significant Place.  It is the smallest possible representable change in a floating-point value.

In C, nextafterf(a, HUGE_VALF) increments a by 1 ULP, yielding the closest representable value greater than a.
nextafterf(a, -HUGE_VALF) decrements a by 1 ULP, yielding the closest representable value less than a.

The exact size of that change depends on the magnitude of a.  For example, the ULP of 1.0f is \$2^{-23} \approx 1.2 \cdot 10^{-7}\$, while the ULP of 10000000.0f is exactly 1.

To find the error in a float value compared to the expected nonzero value expect (loosely the float analogue of value-expect, but measured in ULPs of the expected value), you can use
Code: [Select]
int  float_ulps(const float value, float expect, const int limit) {
    if (expect > value) {
        int  n = 0;
        do {
            n++;
            expect = nextafterf(expect, -HUGE_VALF);
        } while (n < limit && expect > value);
        return -n; // value was below expected, so error is negative

    } else
    if (expect < value) {
        int  n = 0;
        do {
            n++;
            expect = nextafterf(expect, +HUGE_VALF);
        } while (n < limit && expect < value);
        return n; // value was above expected, so error is positive

    } else {
        return 0;
    }
}
This is the most precise way of expressing the error in a value compared to what its type could support.

While one can choose which one to use as the yardstick, I believe the correct or best-known value is the right one.  This means that the ULP counts I show are relative to the correct value, not the calculated value.
Since one is usually only interested in relatively small error ranges, say -15..+15 ULPs, the limit parameter keeps the number of iterations down.

When the expected value is zero, the ULP scale is undefined (zero has no meaningful exponent), so the above will not return anything sensible.  I report such cases separately.



Background:

IEEE 754 Binary32 (float) is a number of format
$$v = (-1)^s \cdot m \cdot 2^{p - 127}$$
where \$v\$ is the logical numerical value, \$s\$ is the sign bit (0 or 1), \$m\$ is the 24-bit mantissa, and \$p\$ is the 8-bit exponent (between 0 and 255).

When \$p = 255\$, \$v\$ is either infinity (\$m = 0\$) or a Not-a-Number (\$m \ne 0\$).  When \$p = 0\$ and \$m = 0\$, \$v\$ is zero; this is why float can represent both +0.0 and -0.0, although the two have equal value.  When \$p = 0\$ but \$m \ne 0\$, \$v\$ is a subnormal value.  For \$1 \le p \le 254\$, \$v\$ is a normal real number.

The most significant bit of the mantissa is 1 if \$p \ge 1\$, and 0 if \$p = 0\$, so it is implicit and not stored at all.  The binary point is logically between this implicit bit and the 23 stored bits of the mantissa.

The Unit in Least Significant Place refers to the magnitude of the least significant bit of the mantissa.  Its sign is used to indicate the direction of the error (positive if overshoot, negative for undershoot).  For a float, the magnitude is
$$1\text{ ULP} = 2^{-23} \cdot 2^{p - 127} = 2^{p - 150}$$
where \$p\$ depends on the value –– always the expected value, not the computed/approximate value in my opinion! –– at hand.
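
For example, \$\pi\$ as a float is stored as 0x40490FDB, so \$p = 128\$ and
$$1\text{ ULP} = 2^{128 - 150} = 2^{-22} \approx 2.4 \cdot 10^{-7}.$$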

If we have IEEE 754 Binary32 floats –– and Intel/AMD x86 and x86-64, ARM Cortex-M4F, Cortex-M7F, and RISC-V with hardware floating-point support all do –– and the byte order of float is the same as that of uint32_t (as it is on the mentioned architectures), we can calculate the actual ULPs using the binary representation of the float value:
Code: [Select]
typedef union {
    float   f;
    uint32_t  u;
    int32_t   i;
} word32;

// Distance from correct to value, in units of correct-value ULPs.
// Note: limit is not used to stop early here; when histogramming, clamp
// the result to -limit..+limit in the caller.
int  float_ulps(const float value, const float correct, int limit) {
    if (correct == value)
        return 0;
    else
    if (correct == 0.0f)
        return (value < 0.0f) ? -16777216 : +16777216;

    const word32  v = { .f = value  };
    const word32  c = { .f = correct };

    // Different signs?
    if ((v.u ^ c.u) & 0x80000000)
        return (v.u & 0x80000000) ? -16777216 : +16777216;

    // Grab the exponent bits.
    uint32_t  ve = v.u & 0x7F800000;
    uint32_t  ce = c.u & 0x7F800000;

    // OR all exponent bits into bit position 23 (the implicit-bit position):
    // the result is 0x00800000 exactly when any exponent bit is set.
    uint32_t  vm = (ve | (ve >> 4)) & 0x07800000;
              vm = (vm | (vm >> 2)) & 0x01800000;
              vm = (vm | (vm >> 1)) & 0x00800000;
    uint32_t  cm = (ce | (ce >> 4)) & 0x07800000;
              cm = (cm | (cm >> 2)) & 0x01800000;
              cm = (cm | (cm >> 1)) & 0x00800000;

    // Difference in exponents.
    int32_t  shift = (int32_t)(ve >> 23) - (int32_t)(ce >> 23);

    if (shift > 0) {
        if (vm) {
            vm = (vm << shift) | (v.u & 0x007FFFFF);
        } else {
            vm = (v.u & 0x007FFFFF) << shift;
        }
        cm |= (c.u & 0x007FFFFF);
    } else
    if (shift < 0) {
        if (cm) {
            cm = (cm << (-shift)) | (c.u & 0x007FFFFF);
        } else {
            cm = (c.u & 0x007FFFFF) << (-shift);
        }
        vm |= (v.u & 0x007FFFFF);
    } else {
        vm |= (v.u & 0x007FFFFF);
        cm |= (c.u & 0x007FFFFF);
    }

    return (c.u & 0x80000000) ? (int)((int32_t)cm - (int32_t)vm) : (int)((int32_t)vm - (int32_t)cm);
}
This works by constructing the exact bit representation of the two values with the exponent of the expected value, and returning their difference.

The limit is a convenience for histogramming errors into an array with 2*limit+1 entries, with the zero-ULPs entry at index limit.  Then, the entry at index limit-limit = 0 collects all negative errors with larger magnitudes, and the entry at index limit+limit = 2*limit collects all positive errors with larger magnitudes.
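
For instance, a histogramming wrapper along those lines might look like this (a sketch; the names are mine, and it works with either float_ulps() implementation):
Code: [Select]
#define  LIMIT  15
static unsigned int  bins[2*LIMIT + 1];  // bins[0] collects <= -LIMIT, bins[2*LIMIT] collects >= +LIMIT

static void tally(const float value, const float expect) {
    int  u = float_ulps(value, expect, LIMIT);
    if (u < -LIMIT) u = -LIMIT;  // clamp outliers into the end bins
    if (u > +LIMIT) u = +LIMIT;
    bins[LIMIT + u]++;
}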

The two float_ulps() implementations yield the exact same answers when value and the expected value have the same sign, and the expected value is nonzero.  Edit: Added the handling for negative value and expect, and fixed the case where a 2 ULP difference was occasionally reported as a 1 ULP difference due to the initial ULP scale being larger.
« Last Edit: June 19, 2024, 06:02:42 pm by Nominal Animal »
 
The following users thanked this post: ali_asadzadeh

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #34 on: June 19, 2024, 01:26:21 pm »
How do you manage to test all floating values between 0 and 1? (If you post the test program, the rest of us can test our routines with the same pattern).
I only hesitate because it is horribly ugly throwaway code I do not want to be judged by.  :-[

I'll clean it up a bit first, okay?  ;)
 

Offline ali_asadzadeh

  • Super Contributor
  • ***
  • Posts: 1929
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #35 on: June 19, 2024, 03:18:53 pm »
Quote
±1 ULP = ±1 Unit in the Least Significant Place.  It is the smallest possible representable change in a floating-point value.
Thanks, I learned something new today. >:D
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #36 on: June 19, 2024, 03:20:26 pm »
NorthGuy: To clarify, the techniques you mentioned are excellent, and I too do recommend their use.  Thank you for pointing them out!

I am only arguing that "to an arbitrary limit with simply adding more terms" is not correct, when using limited precision representations like floating-point and fixed-point formats.  Because of the limited precision in each term, there is a limit to how many terms can affect the sum; and no term can "repair" the error in other terms.

Of course, if you screwed up calculation of any one term, adding more terms will not fix that no matter how accurate they are. This is self-evident.

In real world tasks, there's a certain precision that you must attain with your calculation, so you do whatever it takes to ensure you maintain the desired precision throughout your calculation. For example, not long time ago I had to calculate sin()/cos() with high precision for coordinate transform using fractionals. To do so, I had to retain extra bits in the calculation and then chop them off at the end.

How did I know how many bits to retain? Purely empirical. I just had tested all the possible numbers and made sure the result is correct. This way I had determined how many bits to retain and how many terms to use as well.

Using more bits and using more terms are different things and cannot compensate for each other. No matter how many bits you retain, if you don't use enough terms, your calculation will lack precision. Similarly, if you don't use enough bits, your calculation will not be precise no matter how many terms you use.

 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #37 on: June 19, 2024, 04:14:31 pm »
In the meantime, here is a command-line interface for examining the values and storage formats of floats.  It only cursorily checks that the implementation uses IEEE 754 Binary32 float: it does not detect e.g. GCC -fno-signed-zeros and such.
Code: [Select]
// SPDX-License-Identifier: CC0-1.0
// Nominal Animal, 2024
//
// Compile using for example
//    gcc -Wall -O2 -std=c11  this.c  -lm -o  this.exe

#define  _ISOC99_SOURCE
#include <stdlib.h>
#include <inttypes.h>
#include <string.h>
#include <stdio.h>
#include <math.h>
#include <errno.h>

typedef union {
    uint32_t  u;
    int32_t   i;
    float     f;
} word32;

// Returns NULL if float uses the expected storage format, an error message otherwise.
static const char *verify_float_format(void) {

    // Size must match uint32_t.
    if (sizeof (float) != sizeof (uint32_t))
        return "Invalid float size";

    // Encoding.
    if (((word32){ .u = 0x3f5d7b99 }).f !=  0.8651672f ||
        ((word32){ .u = 0xc0490fdb }).f != -3.1415927f)
        return "Invalid storage format";

    // Subnormal reachable from zero.
    if (nextafterf(0.0f, +1.0f) != ((word32){ .u = 0x00000001 }).f ||
        nextafterf(0.0f, -1.0f) != ((word32){ .u = 0x80000001 }).f)
        return "nextafterf() does not support subnormals";

    // All tests passed.
    return NULL;
}

static const struct {
    const char      *name;
    const char      *desc;
    const word32     value;
} constant[] = {
    { .name = "nan",    .value = { .u = 0x7F800001 }, .desc = "   Not a Number" },
    { .name = "inf",    .value = { .u = 0x7F800000 }, .desc = "   Infinity" },
    { .name = "max",    .value = { .u = 0x7F7FFFFF }, .desc = " = 3.4028235e+38, largest finite float" },
    { .name = "twopi",  .value = { .u = 0x40C90FDB }, .desc = " = 6.2831855" },
    { .name = "pi",     .value = { .u = 0x40490FDB }, .desc = " = 3.1415927" },
    { .name = "e",      .value = { .u = 0x402DF854 }, .desc = " = 2.7182817" },
    { .name = "halfpi", .value = { .u = 0x3FC90FDB }, .desc = " = 1.5707964 = pi / 2" },
    { .name = "one",    .value = { .u = 0x3F800000 }, .desc = " = 1" },
    { .name = "inve",   .value = { .u = 0x3EBC5AB2 }, .desc = " = 0.36787945 = 1/e" },
    { .name = "min",    .value = { .u = 0x00800000 }, .desc = " = 1.1754944e-38, smallest normal float" },
    { .name = "submax", .value = { .u = 0x007FFFFF }, .desc = " = 1.1754942e-38, largest subnormal float" },
    { .name = "submin", .value = { .u = 0x00000001 }, .desc = " = 1e-45, smallest subnormal float" },
    { .name = "zero",   .value = { .u = 0x80000000 }, .desc = " = 0" },
};
#define  CONSTANTS  (sizeof constant / sizeof constant[0])

#ifndef  WHITESPACE
#define  WHITESPACE  "\t\n\v\f\r "
#endif

static inline int separator_at(const char *s) {
    if (!s)
        return 0;
    return (*s == '\0' || *s == '\t' || *s == '\n' ||
            *s == '\v' || *s == '\f' || *s == '\r' ||
            *s == ' '  ||
            *s == '+'  || *s == '-' ||
            *s == '*'  || *s == '/' ||
            *s == '^'  ||
            *s == '~'  || *s == '_');
}

static const char *matches(const char *from, const char *name) {
    if (!from || !*from || !name || !*name)
        return NULL;

    while (1) {
        const int  f = (*from >= 'A' && *from <= 'Z') ? (*from - 'A' + 'a') : *from;
        if (f == '\0')
            return (separator_at(name)) ? from : NULL;
        if (*name == '\0')
            return (separator_at(from)) ? from : NULL;
        if (f != *name)
            return NULL;
        from++;
        name++;
    }
}

static const char *parse_suffixes(const char *from, word32 *to) {
    if (!from)
        return NULL;

    while (1)
        switch (*from) {
        case '~': // Post-increment
            if (to)
                to->f = nexttowardf(to->f, HUGE_VAL);
            from++;
            break;

        case '_': // Post-decrement
            if (to)
                to->f = nexttowardf(to->f, -HUGE_VAL);
            from++;
            break;

        default:
            return from;
        }
}

const char *parse_int(const char *from, int *to) {
    const char *ends = NULL;
    long        val;

    if (!from)
        return NULL;

    errno = 0;
    val = strtol(from, (char **)&ends, 0);
    if (errno)
        return NULL;
    if (!ends || ends == from || !separator_at(ends))
        return NULL;

    if ((long)((int)val) != val)
        return NULL;

    if (to)
        *to = (int)val;
    return ends;
}

const char *parse_part32(const char *from, word32 *to) {
    const char *ends = NULL;
    word32      tmp;

    // Fail if nothing to parse.
    if (!from)
        return NULL;

    from += strspn(from, WHITESPACE);
    if (!*from)
        return NULL;

    // Hexadecimal storage representation?
    if (from[0] == '0' && (from[1] == 'x' || from[1] == 'X')) {
        do {
            errno = 0;
            unsigned long  val = strtoul(from + 2, (char **)&ends, 16);
            if (errno || !ends || ends == from + 2)
                break;
            if ((unsigned long)((uint32_t)val) != val)
                break;

            tmp.u = val;
            ends = parse_suffixes(ends, &tmp);
            if (!separator_at(ends))
                break;

            if (to)
                *to = tmp;
            return ends;

        } while (0);
    }

    // A known constant?
    for (size_t i = 0; i < CONSTANTS; i++) {
        if (!constant[i].name)
            continue;

        uint32_t  negative = 0x00000000;
        ends = from;
        while (*ends == '+' || *ends == '-')
            if (*(ends++) == '-')
                negative ^= 0x80000000;

        ends = matches(ends, constant[i].name);  // skip the sign prefix parsed above
        if (!ends)
            continue;

        tmp.u = negative ^ constant[i].value.u;

        ends = parse_suffixes(ends, &tmp);
        if (!separator_at(ends)) {
            ends = NULL;
            continue;
        }

        if (to)
            *to = tmp;
        return ends;
    }
    ends = NULL;

    // A floating-point numeric value?
    errno = 0;
    double  val = strtod(from, (char **)&ends);
    if (errno)
        return NULL;
    if (!ends || ends == from)
        return NULL;

    // Value is nonzero, but rounds to zero?
    if (val != 0.0 && (float)val == 0.0f)
        return NULL;

    tmp.f = val;
    ends = parse_suffixes(ends, &tmp);
    if (!separator_at(ends))
        return NULL;

    if (to)
        *to = tmp;
    return ends;
}

int parse_word32(const char *from, word32 *const to, const char **errp) {
    const char  *dummy;
    word32       have, temp;
    int          p;

    // Clear error pointer
    if (errp)
        *errp = NULL;
    else
        errp = &dummy;

    if (!from || !*from)
        return -1; // Nothing to parse.

    *errp = from; from = parse_part32(from, &have);
    if (!from)
        return -1; // Failed to parse.

    while (1) {

        // Skip whitespace.
        from += strspn(from, WHITESPACE);
        *errp = from;

        switch (*from) {

        case '\0': // Completed.
            *errp = NULL;
            if (to)
                *to = have;
            return 0;

        case '+': // Addition
            from = parse_part32(from + 1, &temp);
            if (!from)
                return -1;

            have.f += temp.f;
            break;

        case '-': // Subtraction
            from = parse_part32(from + 1, &temp);
            if (!from)
                return -1;

            have.f -= temp.f;
            break;

        case '*': // Multiplication
            from = parse_part32(from + 1, &temp);
            if (!from)
                return -1;

            have.f *= temp.f;
            break;

        case '/': // Division
            from = parse_part32(from + 1, &temp);
            if (!from)
                return -1;

            have.f /= temp.f;
            break;

        case '^': // Multiply by a power of two
            from = parse_int(from + 1, &p);
            if (!from)
                return -1;

            have.f = ldexpf(have.f, p);

            *errp = from;
            from = parse_suffixes(from, &have);
            if (!from)
                return -1;

            break;

        default:
            return -1;
        }
    }
}

typedef  struct {
    char buf[256];
} buffer;

const char *word32_exact_float(buffer *const where, const word32 value) {
    int len = snprintf(where->buf, sizeof where->buf, "%.128f", value.f);
    if (len < 1 || (size_t)len >= sizeof where->buf)
        return "(ERROR)";

    while (len > 1 && where->buf[len-1] == '0' && where->buf[len-2] >= '0' && where->buf[len - 2] <= '9')
        where->buf[--len] = '\0';

    return (const char *)(where->buf);
}

const char *word32_pattern(buffer *const where, const word32 value) {
    char     *p = where->buf + sizeof where->buf;
    uint32_t  u = value.u;

    *(--p) = '\0';

    // 23 mantissa bits
    for (int i = 0; i < 23; i++) {
        *(--p) = '0' + (u & 1);
        u >>= 1;
    }

    // Implicit bit.
    if ((value.u & 0x7F800000) == 0x00000000)
        *(--p) = '0';
    else
    if ((value.u & 0x7F800000) < 0x7F800000)
        *(--p) = '1';
    else
        *(--p) = ' ';  // Not shown for NaNs and Infs

    *(--p) = ' ';

    // 8 exponent bits
    for (int i = 0; i < 8; i++) {
        *(--p) = '0' + (u & 1);
        u >>= 1;
    }

    *(--p) = ' ';

    // Sign bit.
    *(--p) = '0' + (u & 1);
    return (const char *)p;
}

const char *word32_float(buffer *const where, const word32 value) {
    // Use scientific notation for very large and very small values.
    const char  *format = (value.f < -100000000.0f || value.f > +100000000.0f ||
                          (value.f >= -0.000000001f && value.f <= 0.000000001f) ) ? "%.*g" : "%.*f";
    int          decimals = 1;

    while (1) {

        int  len = snprintf(where->buf, sizeof where->buf, format, decimals, value.f);
        if (len < 1 || (size_t)len >= sizeof where->buf)
            return "(ERROR)";

        errno = 0;
        char *end = where->buf;
        float val = strtod(end, &end);
        if (errno || end == where->buf || *end)
            return "(ERROR)";

        if (val == value.f)
            return (const char *)(where->buf);

        decimals++;
        if (decimals >= (int)sizeof where->buf)
            return "(ERROR)";
    }
}

int usage(const char *arg0, int status) {
    fprintf(stderr, "\n");
    fprintf(stderr, "Usage: %s [ -h | --help ]\n", arg0);
    fprintf(stderr, "       %s FLOAT [ FLOAT ... ]\n", arg0);
    fprintf(stderr, "Where each FLOAT can be:\n");
    fprintf(stderr, "       VALUE [ OP VALUE ]\n");
    fprintf(stderr, "where OP is\n");
    fprintf(stderr, "       + for addition,\n");
    fprintf(stderr, "       - for substraction,\n");
    fprintf(stderr, "       * for multiplication,\n");
    fprintf(stderr, "       / for division, and\n");
    fprintf(stderr, "       ^ for multiplying with a power of two,\n");
    fprintf(stderr, "and each VALUE may be suffixed with zero or more of\n");
    fprintf(stderr, "       _ (underscore) to decrement by 1 ULP\n");
    fprintf(stderr, "       ~ (tilde) to increment by 1 ULP\n");
    fprintf(stderr, "possibly repeated. Order of operations is strictly left to right.\n");
    fprintf(stderr, "\n");
    fprintf(stderr, "Known constants:\n");
    for (size_t i = 0; i < CONSTANTS; i++)
        fprintf(stderr, "%15s %s\n", constant[i].name, constant[i].desc);
    fprintf(stderr, "\n");
    exit(status);
}

int main(int argc, char *argv[]) {
    const char *arg0 = (argc > 0 && argv && argv[0] && argv[0][0]) ? argv[0] : "(this)";

    const char *bad_format = verify_float_format();
    if (bad_format) {
        fprintf(stderr, "Detected non-IEEE 754 Binary32 'float' format: %s.\n", bad_format);
        exit(EXIT_FAILURE);
    }

    // If run without arguments, display usage.
    if (argc < 2)
        usage(arg0, EXIT_SUCCESS);

    // If any of the arguments asks for help, display usage.
    for (int arg = 1; arg < argc; arg++)
        if (!strcmp(argv[arg], "-h") || !strcmp(argv[arg], "--help"))
            usage(arg0, EXIT_SUCCESS);

    // Describe input float values.
    for (int arg = 1; arg < argc; arg++) {
        const char *err = NULL;
        buffer      buf;
        word32      val;

        if (parse_word32(argv[arg], &val, &err)) {
            fprintf(stderr, "%s: Invalid expression.\n", (err) ? err : argv[arg]);
            exit(EXIT_FAILURE);
        }
        const int   fpc = fpclassify(val.f);

        printf("Input: %s\n", argv[arg]);
        printf("Value: %s\n", word32_float(&buf, val));
        printf("Exact: %s\n", word32_exact_float(&buf, val));
        printf("Class:");
        if (signbit(val.f))
            printf(" negative");
        if (isfinite(val.f))
            printf(" finite");
        if (isnormal(val.f))
            printf(" normal");
        if (fpc == FP_ZERO)
            printf(" zero");
        if (fpc == FP_NAN)
            printf(" nan");
        if (fpc == FP_INFINITE)
            printf(" inf");
        if (fpc == FP_SUBNORMAL)
            printf(" subnormal");
        printf("\n");
        printf("Bits:  %s\n", word32_pattern(&buf, val));
        printf("Hex:   0x%08x\n\n", (unsigned int)(val.u));
        fflush(stdout);
    }

    return EXIT_SUCCESS;
}

For example, to find the closest float greater than one, run it with "1~" as an argument, and it should print
    Input: 1~
    Value: 1.0000001
    Exact: 1.00000011920928955078125
    Class: finite normal
    Bits:  0 01111111 100000000000000000000001
    Hex:   0x3f800001

Note that the Bits: row includes the implicit mantissa bit for finite (normal and subnormal and zero) values; it is omitted for infinities and NaNs.

Run without parameters to see usage information.

I use this to examine the storage representation of specific float values (Bits: and Hex: lines), to see the minimum decimal representation matching them (Value: line), and to examine the results of simple arithmetic operations.

The reason this is a command-line tool and not, say, an HTML+CSS tool you can use in a browser, is that browsers don't provide an IEEE 754 Binary32 environment (we'd have to synthesize one, yuk), but most desktops, servers, and SBCs do.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #38 on: June 19, 2024, 06:36:20 pm »
This float function/approximation accuracy checking code is ugly, but it will do the job.  Apologies for your eyes.

We put each tested function and known correct function in a separate file, say funcs.c:
Code: [Select]
// SPDX-License-Identifier: CC0-1.0
// Nominal Animal, 2024
//
// sin(2*Pi*x) -- unit-period sin function
//

#include <math.h>

float expected_value(float x) {
    x -= (int)x;

    if (x == -1.0f || x == -0.5f || x == 0.0f || x == 0.5f || x == 1.0f)
        return 0.0f;
    else
    if (x == -0.75f || x == 0.25f)
        return 1.0f;
    else
    if (x == -0.25f || x == 0.75f)
        return -1.0f;
    else {
        // Identical results with (float)sin((long double)x * 6.283185307179586476925286766559005768394338798750211641949889185L);
        return (float)sin((double)x * 6.283185307179586);
    }
}

static inline float usinf_internal(float x) {
    float  xx = x * x;
    return x * (6.2831855f - xx * (41.3417f - xx * (81.60525f - xx * (76.70586f - xx * (42.058693f - xx * 15.094643f)))));
}

float computed_value(float x) {
    x -= (int)x;
    if (x < 0.0f)
        x += 1.0f;

    if (x >= 0.75f)
        return  usinf_internal(x - 1.0f);
    else
    if (x >= 0.25f)
        return -usinf_internal(x - 0.5f);
    else
        return  usinf_internal(x);
}
Function
    float  expected_value(float x);
must return the float we compare to, and function
    float  computed_value(float x);
is your own implementation of it.

The verification code is in a separate file, say bench.c:
Code: [Select]
// SPDX-License-Identifier: CC0-1.0
// Nominal Animal, 2024
//
// Compile using for example
//     gcc -Wall -O2 -c  bench.c
//     gcc -Wall -O2 -c  funcs.c
//     gcc -Wall -O2     bench.o funcs.o   -lm -o   bench.exe
//
// To verify the ULPs calculation, use
//     gcc -DCHECK_ULPS -Wall -O2 -c  this.c

#define _ISOC99_SOURCE
#include <stdlib.h>
#include <inttypes.h>
#include <stdio.h>
#include <math.h>

// Functions in the separate source file
extern float  expected_value(float x);
extern float  computed_value(float x);

// Range of ULPs that matters; determines how slow delta_ulps_slow() can be
#ifndef  MAX_ULPS
#define  MAX_ULPS  16777216
#endif

// Number of bins per direction in the histogram
#ifndef  ULP_RANGE
#define  ULP_RANGE  512
#endif

static uint32_t  histogram_bin_sum = 0;
static uint32_t  histogram_bin[2*ULP_RANGE+1] = { 0 };

// Add to histogram
static inline int  histogram_add(int ulp) {
    if (ulp < -ULP_RANGE) ulp = -ULP_RANGE;
    if (ulp > +ULP_RANGE) ulp = +ULP_RANGE;
    histogram_bin_sum++;
    histogram_bin[ulp + ULP_RANGE]++;
    return ulp;
}

// Number of hits in a specific bin in the histogram
static inline unsigned int  histogram_count(const int ulp) {
    if (ulp >= -ULP_RANGE && ulp <= +ULP_RANGE)
        return histogram_bin[ulp + ULP_RANGE];
    else
        return 0;
}

// Fraction of hits in a specific bin in the histogram
static inline double  histogram_fraction(const int ulp) {
    if (!histogram_bin_sum)
        return 0.0;
    else
    if (ulp >= -ULP_RANGE && ulp <= +ULP_RANGE)
        return (double)histogram_bin[ulp + ULP_RANGE] / (double)histogram_bin_sum;
    else
        return 0.0;
}

// Difference in units of ULP in correct
static inline int delta_ulps_slow(const float value, float correct) {
    if (correct == 0.0f) {
        if (value > 0.0f)
            return  MAX_ULPS;
        else
        if (value < 0.0f)
            return -MAX_ULPS;
        else
            return 0;
    } else
    if (correct > value) {
        int  n = 0;
        do {
            n++;
            correct = nextafterf(correct, -HUGE_VALF);
        } while (correct > value && n < MAX_ULPS);
        return -n;

    } else
    if (correct < value) {
        int  n = 0;
        do {
            n++;
            correct = nextafterf(correct, +HUGE_VALF);
        } while (correct < value && n < MAX_ULPS);
        return n;

    } else
        return 0;
}

typedef union {
    float       f;
    uint32_t    u;
    int32_t     i;
} word32;

static inline int32_t  delta_ulps_fast(const float value, const float correct) {
    if (correct == value)
        return 0;
    else
    if (correct == 0.0f)
        return (value < 0.0f) ? -MAX_ULPS : +MAX_ULPS;

    const word32  v = { .f = value  };
    const word32  c = { .f = correct };

    // Different signs?
    if ((v.u ^ c.u) & 0x80000000)
        return (v.u & 0x80000000) ? -MAX_ULPS : +MAX_ULPS;

    // Grab the exponent bits.
    uint32_t  ve = v.u & 0x7F800000;
    uint32_t  ce = c.u & 0x7F800000;

    // OR all exponent bits into bit position 23 (the implicit-bit position):
    // the result is 0x00800000 exactly when any exponent bit is set.
    uint32_t  vm = (ve | (ve >> 4)) & 0x07800000;
              vm = (vm | (vm >> 2)) & 0x01800000;
              vm = (vm | (vm >> 1)) & 0x00800000;
    uint32_t  cm = (ce | (ce >> 4)) & 0x07800000;
              cm = (cm | (cm >> 2)) & 0x01800000;
              cm = (cm | (cm >> 1)) & 0x00800000;

    // Difference in exponents.
    int32_t  shift = (int32_t)(ve >> 23) - (int32_t)(ce >> 23);

    if (shift > 0) {
        if (vm) {
            vm = (vm << shift) | (v.u & 0x007FFFFF);
        } else {
            vm = (v.u & 0x007FFFFF) << shift;
        }
        cm |= (c.u & 0x007FFFFF);
    } else
    if (shift < 0) {
        if (cm) {
            cm = (cm << (-shift)) | (c.u & 0x007FFFFF);
        } else {
            cm = (c.u & 0x007FFFFF) << (-shift);
        }
        vm |= (v.u & 0x007FFFFF);
    } else {
        vm |= (v.u & 0x007FFFFF);
        cm |= (c.u & 0x007FFFFF);
    }

    return (c.u & 0x80000000) ? (int)((int32_t)cm - (int32_t)vm) : (int)((int32_t)vm - (int32_t)cm);
}

// Check argument x
static void check(const float x) {
    const float  ev = expected_value(x);
    const float   v = computed_value(x);

    const int     u = delta_ulps_fast(v, ev);

#ifdef CHECK_ULPS
    const int    fu = delta_ulps_slow(v, ev);
    if (u != fu)
        fprintf(stderr, "\r%d != %d ULPs, %.12g != %.12g\n", fu, u, v, ev);
#endif

    histogram_add(u);
}

int main(void) {

    if (sizeof (float) != 4 ||
        ((word32){ .u = 0x3f5d7b99 }).f !=  0.8651672f ||
        ((word32){ .u = 0xc0490fdb }).f != -3.1415927f ||
        nextafterf(0.0f, +1.0f) != ((word32){ .u = 0x00000001 }).f ||
        nextafterf(0.0f, -1.0f) != ((word32){ .u = 0x80000001 }).f) {
        fprintf(stderr, "Your 'float' type is not IEEE 754 Binary32 with native byte order.\n");
        exit(EXIT_FAILURE);
    }



    // n_per_cent = 100.0 / estimated_number_of_cases
    const double  n_per_cent = 100.0 / 1056964609.0;
    // n_delta = estimated_number_of_cases / 100
    const unsigned int  n_delta = 10569646;
    unsigned int        n_next  = n_delta;
    unsigned int        n       = 0;

    fprintf(stderr, "Checking .. ");
    fflush(stderr);

    // Check zero separately.
    check(0.0f); n++;
    for (float x = 1.0f; isnormal(x); n++, x = nextafterf(x, -1.0f)) {
        check(x);

        // Progress update
        if (n >= n_next) {
            n_next += n_delta;
            fprintf(stderr, "\r%.1f%% checked .. ", n * n_per_cent);
            fflush(stderr);
        }
    }
    fprintf(stderr, "\rChecking complete.  \n");
    fflush(stderr);

    // Drop zero tails from the histogram
    int  ulp_min = -ULP_RANGE;
    while (ulp_min < 0 && !histogram_count(ulp_min)) ulp_min++;

    int  ulp_max = ULP_RANGE;
    while (ulp_max > 0 && !histogram_count(ulp_max)) ulp_max--;

    printf("Report on differences between calculated and expected results,\n");
    printf("among all normal floats between 0.0 and 1.0, inclusive.\n");
    printf("Subnormal floats (2**-149 to 2**-126) were not tested.\n");
    printf("\n");

    printf("ULPs       Count  Fraction\n");
    for (int ulp = ulp_max; ulp >= ulp_min; ulp--) {
        const char *more = (ulp == -ULP_RANGE) ? "+" :
                           (ulp == +ULP_RANGE) ? "+" : " ";

        printf("%+3d%-2s %10u  %5.1f%%\n", ulp, more,
               histogram_count(ulp), 100.0 * histogram_fraction(ulp));
    }

    return EXIT_SUCCESS;
}
This way, you can use the same bench.c for many different tests, by compiling all files at once and then linking each test case into a separate program, ensuring fair testing code.  For testing both positive and negative values, you can just add check(-0.0f); and check(-x); next to the corresponding non-negative ones, as shown in the sketch below.  For other ranges, take a look at the float tool in my previous post.
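
For example, the main loop could be extended like this (a sketch; the progress reporting from main() is omitted):
Code: [Select]
    // Check both zeroes separately, then every normal magnitude with both signs.
    check(0.0f); check(-0.0f); n += 2;
    for (float x = 1.0f; isnormal(x); n += 2, x = nextafterf(x, -1.0f)) {
        check(x);
        check(-x);
    }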

Compile both files, and link against the math library.  I use GCC, and used
   gcc -Wall -O2 -std=c11 -pedantic -DCHECK_ULPS -c bench.c
   gcc -Wall -O2 -std=c11 -pedantic -c funcs.c
   gcc -Wall -O2 -std=c11 -pedantic bench.o funcs.o -lm -o bench.exe
You can omit the -DCHECK_ULPS, as it just verifies that the binary ULPs calculation matches the slow loop version.

For the implemented 6-term unit period sine function included in funcs.c, the expected output is
    Report on differences between calculated and expected results,
    among all normal floats between 0.0 and 1.0, inclusive.
    Subnormal floats (2**-149 to 2**-126) were not tested.
   
    ULPs       Count  Fraction
     +3         1995    0.0%
     +2       682383    0.1%
     +1    353023465   33.4%
     +0    698222628   66.1%
     -1      4785888    0.5%
     -2       247531    0.0%
     -3          720    0.0%

To keep the program as portable as possible, it is single-threaded, and contains no GNU/Linux-isms.  It reports progress to standard error, using carriage return to overwrite the line instead of printing consecutive lines.  On a single thread on an Intel Core i5 7200U laptop, it takes about a minute to run.
 

Offline Tation

  • Regular Contributor
  • *
  • Posts: 81
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #39 on: June 19, 2024, 07:52:59 pm »
To find the error in a float value compared to the expected nonzero value expect (loosely the float analogue of value-expect, but measured in ULPs of the expected value), you can use
Code: [Select]
int  float_ulps(const float value, float expect, const int limit) {
    if (expect > value) {
        int  n = 0;
        do {
            n++;
            expect = nextafterf(expect, -HUGE_VALF);
        } while (n < limit && expect > value);
        return -n; // value was below expected, so error is negative

    } else
    if (expect < value) {
        int  n = 0;
        do {
            n++;
            expect = nextafterf(expect, +HUGE_VALF);
        } while (n < limit && expect < value);
        return n; // value was above expected, so error is positive

    } else {
        return 0;
    }
}
This is the most precise way of expressing the error in a value compared to what its type could support.

The difference in ULPs between two floats can also be computed (instead of "walked") using the fact that the IEEE 754 format is simply a sign-and-magnitude count of ULPs, thus:
Code: [Select]
typedef union {
    float       f;
    uint32_t    u;
    int32_t     i;
} word32;

// Difference in units of ULP in correct
static inline int delta_ulps_slow(const float value, float correct) {
    if (correct == 0.0f) {
        if (value > 0.0f)
            return  MAX_ULPS;
        else
        if (value < 0.0f)
            return -MAX_ULPS;
        else
            return 0;
    } else {
      word32  v = {.f = value};
      word32  c = {.f = correct};
      // from sign + magnitude to two's complement (32 bit)
      int32_t iv = (v.u & 0x80000000U) ? 0x80000000U - v.u : v.u;
      int32_t ic = (c.u & 0x80000000U) ? 0x80000000U - c.u : c.u;
      return iv - ic;
    }
}

On my machine it gets the very same ULP histogram.

The same trick can be used to compute nextafterf(): simply increment (or decrement) the float bits as if they were an integer.
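
A sketch of the increasing direction (using the word32 union above; finite arguments only, infinities and NaNs not handled):
Code: [Select]
// Next representable float toward +infinity, via integer increment/decrement.
static float next_up(const float x) {
    word32  w = { .f = x };
    if (w.u == 0x80000000u)       // -0.0f: next up is the smallest positive subnormal
        w.u = 0x00000001u;
    else if (w.u & 0x80000000u)   // negative: stepping up shrinks the magnitude
        w.u--;
    else                          // positive or +0.0f: stepping up grows the magnitude
        w.u++;
    return w.f;
}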
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #40 on: June 19, 2024, 08:01:04 pm »
Functions with 4 multiplications:
Code: [Select]
#include <math.h>

float expected_value(float x) {
    x -= (int)x;

    if (x == -1.0f || x == -0.5f || x == 0.0f || x == 0.5f || x == 1.0f)
        return 0.0f;
    else
    if (x == -0.75f || x == 0.25f)
        return 1.0f;
    else
    if (x == -0.25f || x == 0.75f)
        return -1.0f;
    else {
        return (float)sin((double)x * 6.283185307179586);
    }
}


float cos_pol_r(float x) {
   float p0 = -8.32795518e+01;
   float p2 = +6.49167315e+01;
   float p4 = -1.97391497e+01;
   float p6 = +1.00000000e+00;
   float xx = x * x;
   return ((p0 * xx + p2) * xx + p4) * xx + p6;
}


float sin_pol_r(float x) {
   float p1 = +8.01233966e+01;
   float p3 = -4.13334793e+01;
   float p5 = +6.28317388e+00;
   float xx = x * x;
   return ((p1 * xx + p3) * xx + p5) * x;
}


float sinf(float x) {
    if (x > 0.10546875) {
        return cos_pol_r(0.25 - x);
    }
    if (x < -0.10546875) {
        return -cos_pol_r(0.25 + x);
    }
    return sin_pol_r(x);
}


float computed_value(float x) {
    x -= (int)x;
    if (x < 0.0f)
        x += 1.0f;

    if (x >= 0.75f)
        return  sinf(x - 1.0f);
    else {
        if (x >= 0.25f)
            return -sinf(x - 0.5f);
        else
            return  sinf(x);
    }
}


Errors:
Code: [Select]
ULPs       Count  Fraction
+31          178    0.0%
+30         4082    0.0%
+29        10772    0.0%
+28        18004    0.0%
+27        24576    0.0%
+26        24842    0.0%
+25        24612    0.0%
+24        24110    0.0%
+23        27634    0.0%
+22        62334    0.0%
+21        82810    0.0%
+20        71706    0.0%
+19        64390    0.0%
+18        60182    0.0%
+17        56298    0.0%
+16        54276    0.0%
+15        42968    0.0%
+14        15250    0.0%
+13          270    0.0%
+12        54024    0.0%
+11       233896    0.0%
+10        98582    0.0%
 +9        70436    0.0%
 +8        59416    0.0%
 +7       237589    0.0%
 +6      1807455    0.2%
 +5      1860700    0.2%
 +4      1466285    0.1%
 +3      2443116    0.2%
 +2      3438085    0.3%
 +1      5877224    0.6%
 +0      8416375    0.8%
 -1      5555953    0.5%
 -2      3939831    0.4%
 -3      1173737    0.1%
 -4       365389    0.0%
 -5       649810    0.1%
 -6       733024    0.1%
 -7       510590    0.0%
 -8       533905    0.1%
 -9       712209    0.1%
-10      1541060    0.1%
-11      3971406    0.4%
-12       922841    0.1%
-13         9051    0.0%
-14       545256    0.1%
-15     39775223    3.8%
-16     85320785    8.1%
-17     85611062    8.1%
-18     86179561    8.2%
-19     85710897    8.1%
-20     85607309    8.1%
-21     85997833    8.1%
-22     84990193    8.0%
-23     81698824    7.7%
-24     50861056    4.8%
-25     42608121    4.0%
-26     42566959    4.0%
-27     42791517    4.0%
-28     42557141    4.0%
-29     42043804    4.0%
-30     24464613    2.3%
-31       283173    0.0%
 

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #41 on: June 19, 2024, 08:03:36 pm »
Functions with 5 multiplications and one coefficient more:
Code: [Select]
#include <math.h>

float expected_value(float x) {
    x -= (int)x;

    if (x == -1.0f || x == -0.5f || x == 0.0f || x == 0.5f || x == 1.0f)
        return 0.0f;
    else
    if (x == -0.75f || x == 0.25f)
        return 1.0f;
    else
    if (x == -0.25f || x == 0.75f)
        return -1.0f;
    else {
        return (float)sin((double)x * 6.283185307179586);
    }
}


float cos_pol_r(float x) {
   float p0 = +5.90141572e+01;
   float p2 = -8.54375962e+01;
   float p4 = +6.49392826e+01;
   float p6 = -1.97392086e+01;
   float p8 = +1.00000000e+00;
   float xx = x * x;
   return (((p0 * xx + p2) * xx + p4) * xx + p6) * xx + p8;
}


float sin_pol_r(float x) {
   float p1 = -7.56594865e+01;
   float p3 = +8.15965361e+01;
   float p5 = -4.13416753e+01;
   float p7 = +6.28318528e+00;
   float xx = x * x;
   return (((p1 * xx + p3) * xx + p5) * xx + p7) * x;
}


float sinf(float x) {
    if (x > 0.10546875) {
        return cos_pol_r(0.25 - x);
    }
    if (x < -0.10546875) {
        return -cos_pol_r(0.25 + x);
    }
    return sin_pol_r(x);
}


float computed_value(float x) {
    x -= (int)x;
    if (x < 0.0f)
        x += 1.0f;

    if (x >= 0.75f)
        return  sinf(x - 1.0f);
    else {
        if (x >= 0.25f)
            return -sinf(x - 0.5f);
        else
            return  sinf(x);
    }
}


Errors:
Code: [Select]
ULPs       Count  Fraction
 +2        78193    0.0%
 +1    348612641   33.0%
 +0    705099980   66.7%
 -1      3165782    0.3%
 -2         8014    0.0%
 
The following users thanked this post: Nominal Animal

Offline Tation

  • Regular Contributor
  • *
  • Posts: 81
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #42 on: June 19, 2024, 11:25:57 pm »
Functions with 5 multiplications and one coefficient more:

[...]

Errors:
Code: [Select]
ULPs       Count  Fraction
 +2        78193    0.0%
 +1    348612641   33.0%
 +0    705099980   66.7%
 -1      3165782    0.3%
 -2         8014    0.0%

Nice! The best I was able to achieve was ±3 ULP with 5 multiplications to directly compute sine (not sine via cosine, as you did).

My approach was using the Remez algorithm to fit a 9th-order odd polynomial to sin(2·π·x) on (0, 0.25), using doubles for this calculation.  The weighting function was also sin(2·π·x), so the error was «equiripple in ULPs».  Then I truncated the double coefficients to floats and trimmed each coefficient manually by ± some ULPs in a kind of «manual simulated annealing».  This narrowed the error distribution, but was unable to improve on the initial ±3 ULP error of the approximation.

How did you arrive at such coeffs?
« Last Edit: June 19, 2024, 11:33:48 pm by Tation »
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 185
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #43 on: June 20, 2024, 03:34:10 am »
With Taylor you get to approximate a function in such a way that the approximation is more accurate the closer it is to x=0.
If what you want is the error to be constant and minimum in all the approximated range, it is better to use interpolating polynomials choosing the zeros of the polynomial with the chebyshev coefficients.

There are two considerations for numbers close to 0.

Typically, in embedded applications you will be concerned with absolute (rather than relative) errors.  The fact that the representation resolution increases as the number gets smaller is an idiosyncrasy of floating point numbers.  If you use integers instead, the problem disappears.

Second, as you get closer to zero, the sine function approaches sin(x) = x, which you can use for any argument below a certain threshold.

You should be using Remez exchange anyway, since the domain is small and fixed.

Chebyshev nodes are optimal or close to it in this case anyway.

And no, you do not need to exhaustively check all floats, even if there are only about 4 billion of them.  You can bound the error via interval estimates.

How does the new interpolation compare to the optimal one from GitHub?
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #44 on: June 20, 2024, 05:54:03 am »
Functions with 5 multiplications and one coefficient more:

Errors:
Code: [Select]
ULPs       Count  Fraction
 +2        78193    0.0%
 +1    348612641   33.0%
 +0    705099980   66.7%
 -1      3165782    0.3%
 -2         8014    0.0%
Confirmed: ±2 ULPs of error.  Excellent work, Picuino! :-+

Edited to add: The associated cosine variant,
Code: [Select]
float computed_value(float x) {
    if (x < 0.0f)
        x = -x;
    x -= (int)x;

    if (x >= 0.5f)
        return -internal_sinf(0.75f - x);
    else
        return  internal_sinf(0.25f - x);
}
performs even slightly better, the error distribution for it (vs. cos(2*Pi*x)) being
Code: [Select]
ULPs       Count  Fraction
 +2        15916    0.0%
 +1      6722522    0.6%
 +0   1045503626   98.9%
 -1      4715505    0.4%
 -2         7041    0.0%

The two internal approximation functions can also be written as
Code: [Select]
static inline float cos_pol_r(float x) {
    const float xx = x * x;
    return 1.0f - xx * (19.739208f - xx * (64.939285f - xx * (85.4376f - xx * 59.014156f)));
}

static inline float sin_pol_r(float x) {
    const float xx = x * x;
    return x * (6.2831855f - xx * (41.341675f - xx * (81.596535f - xx * 75.659485f)));
}
without affecting their results.  I do believe this form may be more familiar to others.  As expected, the coefficients are almost, but not exactly those I listed in reply #6, the series definition coefficients for sin(2*Pi*x) and cos(2*Pi*x).

I, too, would be interested in the procedure you used to obtain these, Picuino!

Also, care to add an explicit // SPDX-License-Identifier: identifier to your code, too, just to be absolutely clear?  The CC0-1.0 I use is basically a copyright waiver, dedicating the code to the public domain.  You don't have to pick that if you don't want to, but I think for this kind of stuff, we should.
« Last Edit: June 20, 2024, 07:25:58 am by Nominal Animal »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #45 on: June 20, 2024, 07:51:43 am »
On my machine it gets the very same ULP histogram.
For small differences, yes.  The problem is when the exponents differ (by more than ±1).  The one used in the bench.c above should be at least as precise as walking, and when defining CHECK_ULPS (via e.g. -DCHECK_ULPS option for GCC or Clang), it uses both and reports if any counts differ.

The same trick can be used to compute nextafterf(), simply increment (or decrement) the float bits as if they were an integer.
See the float tool in reply #37.  I've also used this for radix-sorting floating point numbers (both Binary32 and Binary64).  To convert all finite Binary32/Binary64 representations to unsigned integer order, one must flip the sign bit if it is clear, and all bits (~) if the sign bit is set.
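
A sketch of that order-preserving mapping (using the word32 union from the earlier posts):
Code: [Select]
// Map a finite float to a uint32_t key so that unsigned integer order
// matches floating-point order: flip the sign bit if clear, all bits if set.
static uint32_t float_sort_key(const float x) {
    const word32  w = { .f = x };
    return (w.u & 0x80000000u) ? ~w.u : (w.u | 0x80000000u);
}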

For vectorized checking, using a V8SI (8×int32 on AVX) with the integer representations shown by the above float tool, conversion to V8SF (8×float) is a no-op, and one can speed up the computation to a ridiculous degree.  When the exponents match and 23 mantissa bits are between L and 2²³-L-1, inclusive, direct subtraction will yield the difference in ULPs, up to ±L ulps.  For V8SI/V8SF, L=8 makes a lot of sense.  The arguments can also be treated as V8SI for all but one out of every 2²³/8 tests (as only one in 2²³ will require a change in exponent).

As each test is also utterly separate, distributing/parallelizing this is trivial –– unless the evaluated function has branches, in which case parallelizing is very difficult.

All interesting testing ranges can be described as one or more contiguous ranges in the storage representation, so a proper bench would support useful ranges as keywords (-1..+1, 0..+1, -Pi/2..+Pi/2, 0..+Pi/2, -Pi..+Pi, 0..+Pi, with and without subnormals), be able to handle arbitrary ranges (via storage representation), and use a runtime-specified number of threads.
« Last Edit: June 20, 2024, 08:04:12 am by Nominal Animal »
 

Offline Tation

  • Regular Contributor
  • *
  • Posts: 81
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #46 on: June 20, 2024, 09:23:23 am »
For small differences, yes.  The problem is when the exponents differ (by more than ±1).  The one used in the bench.c above should be at least as precise as walking, and when defining CHECK_ULPS (via e.g. -DCHECK_ULPS option for GCC or Clang), it uses both and reports if any counts differ.

I've checked (not exhaustively, though) both approaches (walking vs. sign+magnitude subtraction) with exponent differences up to 31 (so that the int counting walk steps doesn't overflow) and always get the same values.

Even with larger exponent differences (with the integer counter overflowing) both approaches get the same (incorrect) value for ULP difference (expected in a machine working in two's complement).

Why do you state that with exponent differences greater than ±1 the other approach is wrong?  As stated, IEEE 754 is a sign+magnitude count of ULPs; to get the difference in ULPs between two numbers, "count" (walk) from one to the other or, well, subtract.  Basically you are counting the number of times you call nextafterf() to reach the other number, but internally nextafterf() is basically integer ++/--.
« Last Edit: June 20, 2024, 03:03:32 pm by Tation »
 
The following users thanked this post: Nominal Animal

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #47 on: June 20, 2024, 09:28:22 am »
I, too, would be interested in the procedure you used to obtain these, Picuino!

Also, care to add an explicit // SPDX-License-Identifier: identifier to your code, too, just to be absolutely clear?  The CC0-1.0 I use is basically a copyright waiver, dedicating the code to the public domain.  You don't have to pick that if you don't want to, but I think for this kind of stuff, we should.
I believe that such good results are also partly a matter of luck. What I like most about the method I used is that you can choose the precision you want to achieve with the desired number of multiplications (more or less depending on the desired computational load).

I have used a simple polynomial interpolation of the sine function and the cosine function around the zero point.  The points chosen to perform the interpolation are the Chebyshev nodes.  These are obtained by dividing a semicircle into equispaced parts and then taking the cosine of those angles.

I have implemented it with a Python program that I posted in reply 17.
https://www.eevblog.com/forum/microcontrollers/lightweight-sin(2x)-cos(2x)-for-mcus-with-float-hardware/msg5546027/#msg5546027
I split the range between the two series (sine and cosine), adjusting the boundary by hand until I found that with the limit x < 27/256 = 0.10546875 the two series had similar errors.
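
In formula form, the nodes used (see cheby_nodes() below) are, for an interval \$[a, b]\$ and \$n\$ nodes,
$$x_i = \frac{a + b}{2} + \frac{b - a}{2} \cos\left( \frac{(2 i + 1) \pi}{2 n} \right), \qquad i = 0, 1, \ldots, n - 1.$$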

For the license, I don't like CC0 because it doesn't force derivative works to be freely licensed as well.  I prefer a free license that imposes at least some conditions on derivative works.  For example, the MIT license is simple, not very restrictive, and requires the copyright and permission notice to be preserved, besides being more appropriate for programs.

Code: [Select]

"""
   Copyright 2024 Picuino

   Permission is hereby granted, free of charge, to any person
   obtaining a copy of this software and associated documentation
   files (the "Software"), to deal in the Software without
   restriction, including without limitation the rights to use,
   copy, modify, merge, publish, distribute, sublicense, and/or
   sell copies of the Software, and to permit persons to whom the
   Software is furnished to do so, subject to the following
   conditions:

   The above copyright notice and this permission notice shall be
   included in all copies or substantial portions of the Software.

   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
   OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
   NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
   HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
   WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
   FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
   OTHER DEALINGS IN THE SOFTWARE.
"""

import numpy as np
import matplotlib.pyplot as plt


def main():
    nodes = 9
    xmax = 27/256
    calc_sin(nodes, xmax)
    calc_cos(nodes, 0.25-xmax)
   

def calc_cos(nodes, xmax):
    x_nodes = cheby_nodes(-xmax, +xmax, nodes)
    y_nodes = cos(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(1, nodes, 2):
        pol[i] = 0
    print()
    for i in range(0, nodes, 2):
        print(f"   float p{i} = {pol[i]:+11.8e};")

    xx = np.linspace(-xmax, xmax, 500)
    yy = np.polyval(pol, xx) - cos(xx)
   
    plt.plot(xx, yy)
    plt.xlabel('cos(x)')
    plt.show()


def calc_sin(nodes, xmax):
    x_nodes = cheby_nodes(-xmax, +xmax, nodes)
    y_nodes = sin(x_nodes)

    pol = np.polyfit(x_nodes, y_nodes, nodes-1)
    for i in range(0, nodes, 2):
        pol[i] = 0
    print()
    for i in range(1, nodes, 2):
        print(f"   float p{i} = {pol[i]:+11.8e};")

    xx = np.linspace(-xmax, xmax, 500)
    yy = np.polyval(pol, xx) - sin(xx)
   
    plt.plot(xx, yy)
    plt.xlabel('sin(x)')
    plt.show()


def cos(x):
    return np.cos(x * 2 * np.pi)

def sin(x):
    return np.sin(x * 2 * np.pi)


def cheby_nodes(a, b, nodes):
    i = np.array(range(nodes))
    x = np.cos((2 * i + 1) * np.pi / (2 * nodes))
    return 0.5 * (b - a) * x + 0.5 * (b + a)


main()


Code: [Select]
/*
   Copyright 2024 Picuino

   Permission is hereby granted, free of charge, to any person
   obtaining a copy of this software and associated documentation
   files (the "Software"), to deal in the Software without
   restriction, including without limitation the rights to use,
   copy, modify, merge, publish, distribute, sublicense, and/or
   sell copies of the Software, and to permit persons to whom the
   Software is furnished to do so, subject to the following
   conditions:

   The above copyright notice and this permission notice shall be
   included in all copies or substantial portions of the Software.

   THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
   OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
   NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
   HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
   WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
   FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
   OTHER DEALINGS IN THE SOFTWARE.
*/

#include <math.h>

float expected_value(float x) {
    x -= (int)x;

    if (x == -1.0f || x == -0.5f || x == 0.0f || x == 0.5f || x == 1.0f)
        return 0.0f;
    else
    if (x == -0.75f || x == 0.25f)
        return 1.0f;
    else
    if (x == -0.25f || x == 0.75f)
        return -1.0f;
    else {
        return (float)sin((double)x * 6.283185307179586);
    }
}


float cos_pol_r(float x) {
   float p0 = +5.90141572e+01;
   float p2 = -8.54375962e+01;
   float p4 = +6.49392826e+01;
   float p6 = -1.97392086e+01;
   float p8 = +1.00000000e+00;
   float xx = x * x;
   return (((p0 * xx + p2) * xx + p4) * xx + p6) * xx + p8;
}


float sin_pol_r(float x) {
   float p1 = -7.56594865e+01;
   float p3 = +8.15965361e+01;
   float p5 = -4.13416753e+01;
   float p7 = +6.28318528e+00;
   float xx = x * x;
   return (((p1 * xx + p3) * xx + p5) * xx + p7) * x;
}


float sinf(float x) {
    if (x > 0.10546875) {
        return cos_pol_r(0.25 - x);
    }
    if (x < -0.10546875) {
        return -cos_pol_r(0.25 + x);
    }
    return sin_pol_r(x);
}


float computed_value(float x) {
    x -= (int)x;
    if (x < 0.0f)
        x += 1.0f;

    if (x >= 0.75f)
        return  sinf(x - 1.0f);
    else {
        if (x >= 0.25f)
            return -sinf(x - 0.5f);
        else
            return  sinf(x);
    }
}
« Last Edit: June 20, 2024, 09:37:21 am by Picuino »
 

Online voltsandjolts

  • Supporter
  • ****
  • Posts: 2395
  • Country: gb
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #48 on: June 20, 2024, 09:55:45 am »

"For the license, I don't like CC0 because it doesn't force derivative works to be freely licensed as well. I prefer a license that is free and that forces derivative works to be free as well."

No. MIT doesn't force downstream to publish source code.
https://pitt.libguides.com/openlicensing/MIT
 
The following users thanked this post: newbrain

Offline Picuino

  • Super Contributor
  • ***
  • Posts: 1032
  • Country: es
    • Picuino web
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #49 on: June 20, 2024, 09:56:32 am »
Yes, I was wrong. Actually the MIT license is not very restrictive and allows the code to be used in proprietary programs. The GPL license would be more appropriate to ensure that it stays free, but I think it is too much for a simple script. It is a discussion that would merit a thread of its own in the forum, to work out what would be most appropriate.
« Last Edit: June 20, 2024, 09:59:11 am by Picuino »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #50 on: June 20, 2024, 10:29:54 am »
Why do you state that with exponents greater than ±1 the other approach is wrong?
Because of the implicit MSB mantissa bit.  If you examine the code in bench.c above, you'll see the lengths I go to handle it right.

However, your prodding made me think harder.  :-+

For positive floats, from +0 to +inf, inclusive, the storage representation of any two float values compares the same as the floats themselves.  For negative values, we only need to do a subtraction to convert them, so that the storage representations as two's complement integers will compare the same way as the original floats, for all numeric floats –– all except NaNs, which we don't care about anyway.

That is proof that incrementing/decrementing in ULPs covers the entire numeric range: from zero through the subnormals and normals up to and including +inf, and the same for negative values.

Thus, you are obviously right, and I was wrong. :palm:

To be specific, we can calculate the difference in ULPs simply with
Code: [Select]
#include <stdint.h>

typedef union {
    uint32_t  u;
    int32_t   i;
    float     f;
} word32;

int32_t difference(const float value, const float compared_to) {
    word32  v = { .f = value };
    word32  c = { .f = compared_to };
    if (v.u & 0x80000000) v.u = 0x80000000 - v.u;
    if (c.u & 0x80000000) c.u = 0x80000000 - c.u;
    return v.i - c.i;
}
with the following caveats:
  • Bogus results if either is a NaN (when (.u & 0x7F800000) == 0x7F800000 && (.u & 0x007FFFFF) != 0x00000000)
  • If one is zero and the other is not, the difference is in the ULPs of the non-zero one
We even know that if one is negative and the other positive, the difference is still sensible: the number of unique representable float values separating the two.

Also, "number of unique float values between the calculated value and the correct value" is a much more understandable definition (than "difference in ULPs"), is fully valid, and requires no caveats other than NaNs not being valid values.
« Last Edit: June 20, 2024, 02:06:50 pm by Nominal Animal »
 

Offline AVI-crak

  • Regular Contributor
  • *
  • Posts: 133
  • Country: ru
    • Rtos
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #51 on: June 20, 2024, 01:24:02 pm »
https://www.desmos.com/calculator/8zaitzwevg
Speed is higher -> accuracy is lower.
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #52 on: June 20, 2024, 10:07:40 pm »
Interesting fact:

For approximately 94% of the numbers being tested, the following relationship holds true:

best_sin(2*pi*x) = 2*pi*x

where 2*pi*x is calculated using single precision floats and best_sin is precise sin (calculated in doubles and then rounded to singles).

It is also interesting that, if you calculate the entire "best_sin(2*pi*x)" expression in doubles, the percentage of matches falls to 63%. This is because of the imprecise representation of pi and rounding errors after the multiplication.
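A rough sketch of how one might reproduce the measurement (assumes IEEE 754 binary32; walks all positive normals up to 1.0f, so it takes a while):
Code: [Select]
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    uint64_t total = 0, match = 0;
    for (uint32_t u = 0x00800000u; u <= 0x3F800000u; u++) {
        float x;
        memcpy(&x, &u, sizeof x);
        float arg  = 6.28318530717958647692f * x;   // 2*pi*x in singles
        float best = (float)sin((double)x * 6.283185307179586);
        total++;
        if (arg == best)
            match++;   // sin(y) rounds to y itself for most small y
    }
    // The post above reports roughly 94% here.
    printf("%.1f%% match\n", 100.0 * (double)match / (double)total);
    return 0;
}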


 

Offline dmsc

  • Newbie
  • Posts: 5
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #53 on: June 21, 2024, 03:27:18 pm »
Hi!

Better than presenting a given approximation is to learn how to generate new ones.

Here, I use Sollya ( https://www.sollya.org/ ), an open source package to compute polynomial approximations to functions.

First, we start with a simple approximation: We want to approximate sin(2π x) in the interval [0, 0.25], this is the first quadrant as we can simply use symmetries for the other quadrants.

Code: [Select]
> prec = 500;
The precision has been set to 500 bits.
> P = fpminimax(sin(2*pi*x), [| x, x^3, x^5, x^7 |], [|SG...|], [2^-30; 0.25], absolute);
> P;
x * (6.28316402435302734375 + x^2 * (-41.337139129638671875 + x^2 * (81.34066009521484375 + x^2 * (-70.99242401123046875))))
>

The parameters are: the function to approximate, the powers we want to use, the precision of the coefficients ("SINGLE"), the interval, and the error to minimize. We start the interval at a small positive value instead of 0, as sin(x) is 0 at 0, and we only specify odd powers as our sin() function is odd. Also, we set the working precision to a big value, to get good stability in the calculations.

To verify that the given approximation uses single floating point values, we can use printexpansion, and to get the error we use dirtyinfnorm:

Code: [Select]
> printexpansion(P);
x * (0x401921f5c0000000 + x^2 * (0xc044ab2760000000 + x^2 * (0x405455cd60000000 + x^2 * 0xc051bf83e0000000)))
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
5.901410675865754274125938111579946370898526416744156813463401263314036742159040177459193842802082182498861473608250266942315012450527420167572139088982e-7
>

The error is given in the full precision, but it is about 5.9E-7. This is with 4 coefficients. One of the problems of this approximation is that it gives the exact value at 0, but not at 0.25:
Code: [Select]
> P(0);
0
> P(0.25);
0.9999994118697941303253173828125
>

This is expected in this type of approximation, as the solver is minimizing the maximum absolute error over the whole interval. We can do better by constraining the polynomial so that the value at 0.25 is exactly 1.
This is done by solving P(x) = A*x + B*x^3 + C*x^5 + D*x^7 at 0.25: we have 1 = A/4 + B/64 + C/1024 + D/16384, which means that D = 16384 - 4096*A - 256*B - 16*C.
Replacing, our new polynomial is: P(x) = A*(x - 4096*x^7) + B*(x^3 - 256*x^7) + C*(x^5 - 16*x^7) + 16384*x^7.

In Sollya:
Code: [Select]
> P = fpminimax(sin(2*pi*x), [| x-4096*x^7, x^3-256*x^7, x^5-16*x^7 |], [|SG...|], [2^-30; 0.25], absolute, 16384*x^7);
> P;
x * (6.283161163330078125 + x^2 * (-41.336696624755859375 + x^2 * (81.32404327392578125 + x^2 * (-70.8184814453125))))
> P(0);
0
> P(0.25);
1
> dirtyinfnorm(P-sin(2*pi*x),[0,0.25]);
6.8045399284315485022936000384196978305609872188241776832077688014331844645120006948491241788562445606690675631779229909577714291447121361540476950202e-7
> printexpansion(P);
x * (0x401921f500000000 + x^2 * (0xc044ab18e0000000 + x^2 * (0x405454bd20000000 + x^2 * 0xc051b46200000000)))
>

So, we now have exact approximations at 0 and 0.25, but the error is a little higher at 6.8E-7.

Another problem of this approximation is that, to calculate the quadrant, we probably need to multiply by 4 at the start - so it would be better to use an interval from 0 to 1 instead of 0 to 0.25, approximating sin(π/2 x). Now, our equations are simpler:

Code: [Select]
> P = fpminimax(sin(pi/2*x), [| x-x^7, x^3-x^7, x^5-x^7 |], [|SG...|], [2^-30; 1], absolute, x^7);
> P;
x * (1.57079029083251953125 + x^2 * (-0.645885884761810302734375 + x^2 * (7.9418011009693145751953125e-2 + x^2 * (-4.322417080402374267578125e-3))))
> P(0);
0
> P(1);
1
> dirtyinfnorm(P-sin(pi/2*x),[0,1]);
6.8045399284315485022936000384196978305609872188241776832077688014331844645120006948491241788562445606690675631779229909577714291447121361540476950202e-7
>

As expected, the error is the same as above, we simply scaled the coefficients. And we can now check how the error decreases by adding one more coefficient:

Code: [Select]
> P = fpminimax(sin(pi/2*x), [| x-x^9, x^3-x^9, x^5-x^9, x^7-x^9 |], [|SG...|], [2^-30; 1], absolute, x^9);
> P;
x * (1.5707962512969970703125 + x^2 * (-0.6459629535675048828125 + x^2 * (7.9687185585498809814453125e-2 + x^2 * (-4.67061810195446014404296875e-3 + x^2 * 1.5013478696346282958984375e-4))))
> printexpansion(P);
x * (0x3ff921fb40000000 + x^2 * (0xbfe4abba80000000 + x^2 * (0x3fb4666120000000 + x^2 * (0xbf73217f80000000 + x^2 * 0x3f23adb000000000))))
> dirtyinfnorm(P-sin(pi/2*x),[0,1]);
7.948025357666819715730261204406953573391265381670061492705068182157447496636645892899473552919933216095619523713857258147870996919642403290705587558827e-9
>

So, the error is about 85 times smaller with one more coefficient - and this is as good as possible using single precision, as 1/2^24 is 6E-8.

Have Fun!
 

Offline Tation

  • Regular Contributor
  • *
  • Posts: 81
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #54 on: June 23, 2024, 09:49:46 am »
Quote from: dmsc
Here, I use Sollya ( https://www.sollya.org/ ), an open source package to compute polynomial approximations to functions.
[...]
The error is given in the full precision, but it is about 5.9E-7. This is with 4 coefficients.

Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with error ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

I tested Sollya some time ago. It was interesting since it supposedly uses a different theoretical approach that takes into account the finite precision used to evaluate the resulting polynomial, thus producing the absolutely optimal polynomial using, say, Binary32. Unfortunately I was not able (maybe my fault) to get better results with it than with bare Python versions of the Remez algorithm, and (in my case) the polynomials I got with Sollya were definitely not optimal.

« Last Edit: June 23, 2024, 12:22:03 pm by Tation »
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #55 on: June 23, 2024, 01:53:41 pm »
Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with error ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

There's no reason to worry about such small values. You can add "if (x < min_value) return x" in front of any sine algorithm and it will eliminate the ULP errors for values below min_value. For single precision floats, the min_value is greater than 1e-5 (approximately 0.00044 for sin(x), or 0.00007 for sin(2*pi*x)). For all the small numbers (which is over 90% of all numbers in the test), the only source of error is the imprecision in pi (its float representation is 28 ppb bigger than the real value) and multiplication errors. If we used sin(x) instead of sin(2*pi*x), there would be zero ULP error in this area. The multiplication by 2*pi is the sole source of ULP errors for small values.
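A sketch of that guard applied to the thread's usinf() (naming is mine, for illustration; the exact threshold for the 2·π period is the one given in reply #57):
Code: [Select]
#include <math.h>

float usinf(float x);   // the unity-period sine from the opening post

// Early-out for tiny arguments: below this threshold (see reply #57 for the
// exact bound), 2*pi*x in single precision is within 1 ULP of sin(2*pi*x).
float usinf_guarded(float x) {
    if (fabsf(x) <= 0.0000733965352992527186870574951171875f)
        return 6.2831853f * x;
    return usinf(x);
}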
 

Offline dmsc

  • Newbie
  • Posts: 5
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #56 on: June 23, 2024, 06:32:23 pm »
Hi!
Quote from: Tation
Why `absolute`? An absolute error of ±5.9e-7 may be acceptable if the sine we're looking for is much greater than that, but if the argument is small, say 1e-5, computing its sine with error ±5.9e-7 is not that good. I (almost) always look for minimax relative error.

This is why the polynomial guarantees that P(0) = 0, which ensures that the error indeed goes to 0 there. And this zero of sin(x) inside the interval to approximate is the reason for minimizing the absolute error: the division by 0 causes instabilities in a relative-error fit.

Quote
I tested Sollya time ago. It was interesting since it supossedly uses a different theoretical approach that takes into account the finite precission used to evaluate the resulting polynomial, thus producing the absolute optimal polynomial using, say, Binary32. Unfortunately I was not able (maybe my fault) to get better results with it than with bare python versions of the Remez algorithm and (in my case) the polynomials I got with Sollya were definitely not optimal.

What I find great in Sollya is that you can give your own polynomial bases and a constrained part; this gives much more flexibility in searching for your solution.

Have Fun!
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #57 on: June 23, 2024, 06:38:48 pm »
For float x, sin(x) == x for x = -0.00044363294728100299835205078125 .. +0.00044363294728100299835205078125, inclusive.  Because
$$\frac{d \, \sin(C x)}{d \, x} = C \cos(C x) \quad \text{ and } \quad \frac{d \, \cos(C x)}{d \, x} = -C \sin(C x)$$
for other periods the equation is sin(C·x) == C·x.  Because of rounding, this is not generally true for x close to zero, unless C is a power of two.

If we accept either adjacent float value as correct (1 ULP error), then
  • C = 2*Pi: x = -0.0000733965352992527186870574951171875 .. +0.0000733965352992527186870574951171875.
  • C = Pi: x = -0.000146793070598505437374114990234375 .. +0.000146793070598505437374114990234375.
  • C = Pi/2: x = -0.00029358614119701087474822998046875 .. +0.00029358614119701087474822998046875.
where the magnitudes of the range endpoints are exact; 10087543·2⁻³⁷, 10087543·2⁻³⁶, and 10087543·2⁻³⁵, respectively.
« Last Edit: June 23, 2024, 06:40:21 pm by Nominal Animal »
 

Offline bson

  • Supporter
  • ****
  • Posts: 2408
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #58 on: June 24, 2024, 07:32:53 pm »
How do you manage to test all floating values between 0 and 1?

In C/C++, you can use nextafter() or nextafterf() to iterate over representable floating point numbers.
https://en.cppreference.com/w/c/numeric/math/nextafter
Or for small numbers with a 0 exponent, directly use FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON.
Found in float.h
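A sketch of that enumeration (slow, since it steps through every representable value, subnormals included):
Code: [Select]
#include <math.h>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t count = 0;
    for (float x = 0.0f; x < 1.0f; x = nextafterf(x, 2.0f))
        count++;   // here one would test each x
    // Counts subnormals too, unlike a walk over the normal bit patterns.
    printf("%u floats in [0, 1)\n", count);
    return 0;
}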
« Last Edit: June 24, 2024, 07:34:55 pm by bson »
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 185
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #59 on: June 25, 2024, 01:22:35 am »
How do you manage to test all floating values between 0 and 1?

In C/C++, you can use nextafter() or nextafterf() to iterate over representable floating point numbers.
https://en.cppreference.com/w/c/numeric/math/nextafter
Or for small numbers with a 0 exponent, directly use FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON.
Found in float.h

Why bother? Just iterate through all possible 32-bit integers and interpret them as floats.
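A sketch of that brute-force walk over all 2^32 bit patterns, skipping the non-finite ones:
Code: [Select]
#include <stdint.h>
#include <string.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    uint64_t finite = 0;
    uint32_t u = 0;
    do {
        float x;
        memcpy(&x, &u, sizeof x);
        if (isfinite(x))
            finite++;   // here one would run the actual test on x
    } while (u++ != 0xFFFFFFFFu);   // covers every 32-bit pattern
    printf("%llu finite patterns\n", (unsigned long long)finite);
    return 0;
}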
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #60 on: June 25, 2024, 05:37:45 am »
0x00000000 = 0.0f, and
0x00800000 .. 0x3F800000 = all normal (non-subnormal) positive floats greater than zero, up to and including 1.0f.

As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 values you need to test.  This is exactly what the test bench I posted does.
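In code, the walk looks something like this (a sketch; the actual test bench compares each value against a double-precision reference):
Code: [Select]
#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    uint32_t count = 1;   // 0x00000000 = 0.0f, handled separately
    for (uint32_t u = 0x00800000u; u <= 0x3F800000u; u++) {
        float x;
        memcpy(&x, &u, sizeof x);   // reinterpret the bits as a float
        count++;                    // here one would test usinf(x)
    }
    printf("%u values\n", count);   // prints 1056964610
    return 0;
}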
 
The following users thanked this post: SiliconWizard

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: ca
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #61 on: June 25, 2024, 03:40:52 pm »
As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 you need to test.  This is exactly what the test bench I posted does.

Yeap, only one billion floats, 90% of which are uselessly small. For comparison, if the same 32 bits of memory are used as unsigned integers, you get 4 billion numbers evenly spaced in the interval, 100+ times better absolute precision, and less computational resources.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15192
  • Country: fr
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #62 on: June 25, 2024, 10:31:14 pm »
Use the best tool for the job - conventional FP is not always the answer.
In some applications, fixed-point will give you better overall precision.
For a "better" use of bits, while keeping the benefits of FP, one could consider Posits, although I personally don't know of any *commercial* CPU that implement them on a hardware level as of yet. You can always use software implementations (very slow).
 

Online radiolistener

  • Super Contributor
  • ***
  • Posts: 3928
  • Country: ua
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #63 on: June 26, 2024, 01:28:47 am »
yes, fixed point with the same bit size has better precision and better resolution for samples

float has unused and duplicated codes
« Last Edit: June 26, 2024, 01:33:33 am by radiolistener »
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #64 on: June 26, 2024, 12:22:24 pm »
IEEE 754 Binary32 float storage representations:

0x00000000 = +0.0f
0x00000001 .. 0x007FFFFF = positive subnormals, 2⁻¹⁴⁹ .. 8388607×2⁻¹⁴⁹, inclusive; < 2⁻¹²⁶
0x00800000 = 2⁻¹²⁶, smallest positive normal
:
0x3F800000 = +1.0f
0x4B800000 = +16777216.0f, largest integer in the consecutive range
0x4B800001 = +16777218.0f
0x7F7FFFFF = 16777215×2¹⁰⁴ < 2¹²⁸, largest positive value representable
0x7F800000 = +Infinity
0x7F800001 .. 0x7FFFFFFF = +NaN

0x80000000 = -0.0f
0x80000001 .. 0x807FFFFF = negative subnormals, -2⁻¹⁴⁹ .. -8388607×2⁻¹⁴⁹, inclusive; > -2⁻¹²⁶
0x80800000 = -2⁻¹²⁶, largest negative normal (closest to zero)
:
0xBF800000 = -1.0f
0xCB800000 = -16777216.0f, smallest integer in the consecutive range
0xFF7FFFFF = -16777215×2¹⁰⁴ > -2¹²⁸, most negative value representable
0xFF800000 = -Infinity
0xFF800001 .. 0xFFFFFFFF = -NaN

Zero is the only repeated numeric value, and only because +0.0f and -0.0f have their own separate representations.

There is exactly one positive infinity and one negative infinity, but 8388607 positive NaNs and 8388607 negative NaNs, and thus 16777214 NaN representations.  Because any arithmetic operation with a NaN yields NaN (but retains the sign of the operation), you have millions of non-numeric "marker" values you can use.  For example, a stack-based single-precision floating-point calculator might use the NaNs to describe the operators (say positive NaNs), and functions (say negative NaNs), and thus only need a single stack of float values.
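A sketch of that marker idea (hypothetical helpers, not from any posted code; the quiet bit is set so the payload never becomes a signaling NaN):
Code: [Select]
#include <stdint.h>
#include <string.h>
#include <stdio.h>

// Pack a 22-bit token into a quiet-NaN payload.
static float token_to_nan(uint32_t token) {
    uint32_t u = 0x7FC00000u | (token & 0x003FFFFFu);
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

// Recover the token from the payload.
static uint32_t nan_to_token(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u & 0x003FFFFFu;
}

int main(void) {
    float op = token_to_nan(42);   // a non-numeric "marker" value
    printf("%d %u\n", op != op, nan_to_token(op));   // NaN != NaN: prints 1 42
    return 0;
}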

On architectures like x86, x86-64, any ARM with hardware floating-point support, any Linux architecture with hardware or software floating-point support, you can use the float representation examination tool I posted in reply #37 to explore and check these; I use it constantly when working with floats.

If you want to add a check for architecture support to it, I'd add
    if (sizeof (float) != sizeof (uint32_t) ||
        ((word32){ .u = 0x00000000 }).f != ((word32){ .u = 0x80000000 }).f ||
        ((word32){ .u = 0x40490FDB }).f != 3.1415927410125732421875f ||
        ((word32){ .u = 0x4B3C614E }).f != 12345678.0f) {
        fprintf(stderr, "Unsupported 'float' type: not IEEE 754 Binary32.\n");
        exit(EXIT_FAILURE);
    }
to the beginning of main().  I do not normally bother, because I haven't had access to a system where that would trigger in decades.  That includes all my SBCs and microcontrollers.  (Notably, I do not have any DSPs, which are the exception, and could actually still use a non-IEEE 754 Binary32 float type.)

The function in reply #50 yields the difference between any two non-NaN float values, in the number of representable numeric float values you must step over to get from one to the other.  If they are the same value, it returns 0; if they are consecutive representable numeric float values, it returns 1; if there is one representable numeric float between the two values, it returns 2, and so on.  For example, for the difference between the smallest positive subnormal and the largest negative subnormal (storage representations 0x00000001 and 0x80000001) it returns 2, since +0.0f and -0.0f collapse to the single numeric value zero in between.

The key idea in that difference is that the storage representation of a negative float value is subtracted from 0x80000000, which turns it into the integer negative of the storage representation of the corresponding positive float value.  Note that this also maps -0.0f to the same integer representation as +0.0f, 0x00000000.  The difference of such modified signed integer storage representations is the difference in number of representable float values, as described in the previous paragraph.

For radix sorting, the storage representation is kept as a 32-bit unsigned integer.  For representations that have the sign bit clear, you set the most significant bit.  For all other representations, you invert all bits (~).  This keeps the representations of +0.0f and -0.0f separate (0x80000000 and 0x7FFFFFFF, respectively), puts the positive floats' representations above the negative ones, and inverts the order of the representations of the negative ones, essentially ensuring that all non-NaN float values order the same way as their modified unsigned 32-bit storage representations.
After radix sorting, the inverse operation needs to invert all bits (~) if the most significant bit is clear, and only clear the most significant bit if it is set.  This undoes the storage representation modification, restoring the exact original float representations.
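A compact sketch of that mapping and its inverse (naming is mine; both operate on the raw 32-bit storage representation):
Code: [Select]
#include <stdint.h>

// Forward: after this, all non-NaN floats sort correctly as unsigned integers.
static inline uint32_t float_to_key(uint32_t u) {
    return (u & 0x80000000u) ? ~u : (u | 0x80000000u);
}

// Inverse: restores the exact original representation after sorting.
static inline uint32_t key_to_float(uint32_t k) {
    return (k & 0x80000000u) ? (k & 0x7FFFFFFFu) : ~k;
}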

I, too, use fixed point types quite often.  However, this thread is about the specific case when you have hardware float support, and want to leverage that instead.  So, commenting that one should use fixed-point here instead of hardfloat is an apples-to-oranges replacement suggestion.
 

Offline nimish

  • Regular Contributor
  • *
  • Posts: 185
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #65 on: June 26, 2024, 08:10:57 pm »
0x00000000 = 0.0f, and
0x00800000 .. 0x3F800000 = all normal (non-subnormal) positive floats greater than zero, up to and including 1.0f.

As mentioned several times in this thread, in the 0.0f .. 1.0f range (excluding subnormals), there are only 1,056,964,610 values you need to test.  This is exactly what the test bench I posted does.

The difference between 1B and 4B is meaningless, plus it's useful to see what happens with various NaNs/special bit patterns.

Anyway it's a moot point.

Also "uselessly small" is use-case dependent.
 

Offline bson

  • Supporter
  • ****
  • Posts: 2408
  • Country: us
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #66 on: June 27, 2024, 10:05:23 pm »
Fixed-point obviously has better precision, but it has terrible range and doesn't do proper arithmetic rounding.
Fixed-point is also always available, and I don't particularly object to using it in an interrupt handler - whereas saving and restoring FP state on interrupt makes me cringe (even if it's irrational).
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #67 on: June 28, 2024, 05:08:20 pm »
proper arithmetic rounding
Because fixed-point addition and subtraction are their integer equivalents, rounding is only applied for multiplication and division (and conversion to and from other formats like strings).

Rounding halfway away from zero –– which I personally prefer –– requires only the most significant dropped bit (of the absolute value), which is added to the absolute value of the product.  This breaks ties always away from zero.
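For example, a Q16.16 fixed-point multiplication with this rounding might look like the following sketch (the format and naming are mine, just for illustration):
Code: [Select]
#include <stdint.h>

// Q16.16 multiply, rounding halfway away from zero: add the most significant
// dropped bit of the absolute value, then truncate.
static inline int32_t qmul_half_away(int32_t a, int32_t b) {
    int64_t p  = (int64_t)a * b;        // exact Q32.32 product
    int64_t ap = (p < 0) ? -p : p;      // absolute value of the product
    int64_t r  = (ap + 0x8000) >> 16;   // 0x8000 = MSB of the 16 dropped bits
    return (int32_t)((p < 0) ? -r : r);
}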

Always-truncating operations, rounding all results towards zero, are sometimes preferred, however.  For addition, subtraction, multiplication, and square root, you then know your result is either the closest representable value to the exact result, or one ULP below that (towards zero).  If you have a way to check, you can always produce the exact result to within the precision available!  It is particularly useful for exact square and cubic roots.  When the operations use any other rounding method, you have at minimum two other possibilities for the exact correct result, so checking becomes at least twice as expensive.

On hardware that supports it, using C99 fesetround(FE_TOWARDZERO) via <fenv.h> can be very, very useful because of that.  Do not ignore the value of always rounding results towards zero!
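A minimal sketch of switching the rounding mode (C99; whether a given mode is supported is implementation-defined, hence the checks):
Code: [Select]
#include <fenv.h>
#include <stdio.h>

int main(void) {
#ifdef FE_TOWARDZERO
    if (fesetround(FE_TOWARDZERO) != 0) {
        fprintf(stderr, "Could not set round-toward-zero.\n");
        return 1;
    }
    volatile float a = 1.0f, b = 3.0f;   // volatile defeats constant folding
    printf("%.9g\n", (double)(a / b));   // quotient now truncated towards zero
#else
    fprintf(stderr, "FE_TOWARDZERO not available on this target.\n");
#endif
    return 0;
}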
 
The following users thanked this post: SiliconWizard

Offline Tation

  • Regular Contributor
  • *
  • Posts: 81
  • Country: pt
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #68 on: June 29, 2024, 05:20:32 pm »
May be of interest to somebody: https://galileo.phys.virginia.edu/classes/551.jvn.fall01/goldberg.pdf

On page 8 it justifies ULP as a measure of error and provides a formula to compute it. The formula can give error values as non-integer ULPs.
 

Offline Nominal AnimalTopic starter

  • Super Contributor
  • ***
  • Posts: 6788
  • Country: fi
    • My home page and email address
Re: Lightweight sin(2·π·x), cos(2·π·x) for MCUs with float hardware
« Reply #69 on: June 29, 2024, 06:21:30 pm »
It also explains why the preferred method of breaking ties when rounding is to round exact halfway cases to even: it stops the slow drift in repeated operations that is possible when always rounding halfway up.
 

