Author Topic: Unum and Posit floating point numbers - any experience?  (Read 1414 times)


Offline iMo (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4790
  • Country: pm
  • It's important to try new things..
Unum and Posit floating point numbers - any experience?
« on: November 07, 2022, 04:11:51 pm »
Do you have any practical experience with the "new floating point format" - Unum or Posit?

« Last Edit: November 07, 2022, 04:14:34 pm by imo »
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: Unum and Posit floating point numbers - any experience?
« Reply #1 on: November 07, 2022, 06:17:43 pm »
Only because Julia - internally - uses it  :-//
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline iMo (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4790
  • Country: pm
  • It's important to try new things..
Re: Unum and Posit floating point numbers - any experience?
« Reply #2 on: November 07, 2022, 06:25:21 pm »
There are some RISC-V FPU experiments - like this one:

https://dl.acm.org/doi/fullHtml/10.1145/3446210
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14482
  • Country: fr
Re: Unum and Posit floating point numbers - any experience?
« Reply #3 on: November 07, 2022, 07:21:57 pm »
Got interested in those a while ago and did some experiments with libraries (didn't bother/take the time to implement them myself).

There's good and bad about them. From my (very limited) experiments, and from what I've otherwise read from more informed people, they don't really seem to be worth it in the general case. There may be particular applications for which they'll shine.

But most of all, the claim that they'd be more efficient/use fewer resources than IEEE FP has apparently not been proven true. A number of papers have shown the opposite, implementing them on FPGA or silicon.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: Unum and Posit floating point numbers - any experience?
« Reply #4 on: November 07, 2022, 09:33:56 pm »
All of the architectures (various ARM cores and x86-64) I use for serious number-crunching already have fast IEEE-754 floating-point support in hardware, so I'd have to implement unum math in software, and that would be an order of magnitude slower, or more.

I also like to use AVR on occasion, more just for fun than anything else, but that's not suitable for serious number crunching anyway.

Therefore, I consider them of theoretical interest, and of practical interest for FPGA and processor developers, but not for software developers like myself.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4039
  • Country: nz
Re: Unum and Posit floating point numbers - any experience?
« Reply #5 on: November 08, 2022, 12:30:59 pm »
Quote from: SiliconWizard on November 07, 2022, 07:21:57 pm
But most of all, the claim that they'd be more efficient/use fewer resources than IEEE FP has apparently not been proven true. A number of papers have shown the opposite, implementing them on FPGA or silicon.

Not sure what you mean by this.

A Posit FPU will clearly need a wider multiplier and bigger exponents than an IEEE FPU for the same data size. That's because Posits give more precision for smallish numbers and much more range for big numbers.

An FPU for 32 bit Posits is bigger in area and power use than an FPU for 32 bit IEEE, but nowhere near the area or power use of an FPU for 64 bit IEEE.

Claims by Posit inventor John Gustafson that you can simply replace 64 bit IEEE floats by 32 bit Posits are clearly ridiculous. Yes, sure, in some cases where 32 bit IEEE is almost but not quite enough.  But not in the general case.

Posits have the same or more bits of precision than IEEE FP for smallish numbers. How small? With the usual parameters, up to the exact same point where IEEE can no longer represent every integer: 2^24 (1.68e7) for single precision and 2^53 (9e15) for double precision. And also for numbers between 1.0 and the reciprocals of those. And also for the corresponding negative numbers, obviously.
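
Easy to check the single-precision end of that in C, if anyone doubts it (binary32 has a 24-bit significand: 23 stored bits plus the hidden bit):

Code: [Select]
#include <stdio.h>

int main(void)
{
    /* Every integer up to 2^24 is exact in binary32; 2^24 + 1 is the
       first one that is not, and rounds back down to 2^24 under
       round-to-nearest-even. */
    float a = 16777216.0f;   /* 2^24     */
    float b = 16777217.0f;   /* 2^24 + 1 */
    printf("a = %.1f, b = %.1f, a == b: %d\n", a, b, a == b);
    return 0;
}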

For numbers near ±1, Posits have up to an extra decimal digit of precision compared to IEEE. That can really make the difference sometimes. So can not overflowing abruptly to infinity at ±3.4e38 (especially) and ±1.8e308, and not dropping into denorms at (approximately) their reciprocals.


For a given size in RAM, Posits are slightly superior, no question in my mind. They need slightly larger FPUs to implement them, and no one is shipping such FPUs in real hardware just yet.
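
If anyone wants to see where the tapering comes from, here's a rough decode sketch in C for 32-bit posits with es = 2 (the draft-standard parameters). It handles zero and NaR but is only lightly spot-checked; the point is how the variable-length regime eats fraction bits as the magnitude grows:

Code: [Select]
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode a 32-bit posit (es = 2) into a double, and report how many
   fraction bits were left over after the regime and exponent fields. */
static double posit32_decode(uint32_t p, int *frac_bits_out)
{
    if (p == 0)           { *frac_bits_out = 0; return 0.0; }  /* zero */
    if (p == 0x80000000u) { *frac_bits_out = 0; return NAN; }  /* NaR  */

    int sign = (p >> 31) & 1;
    uint32_t x = sign ? (uint32_t)(-(int32_t)p) : p;  /* 2's complement */

    /* Regime: a run of identical bits starting just below the sign bit,
       terminated by one opposite bit. */
    int bit = 30;
    int lead = (x >> bit) & 1;
    int run = 0;
    while (bit >= 0 && (int)((x >> bit) & 1) == lead) { run++; bit--; }
    bit--;                                 /* skip the terminating bit */
    int k = lead ? run - 1 : -run;

    /* Up to es = 2 exponent bits; missing bits read as zero. */
    int e = 0, got = 0;
    while (got < 2 && bit >= 0) { e = (e << 1) | ((x >> bit) & 1); bit--; got++; }
    e <<= (2 - got);

    int frac_bits = bit >= 0 ? bit + 1 : 0;    /* whatever remains */
    uint32_t frac = frac_bits ? (x & ((1u << frac_bits) - 1)) : 0;

    *frac_bits_out = frac_bits;
    double v = (1.0 + frac / ldexp(1.0, frac_bits)) * ldexp(1.0, 4 * k + e);
    return sign ? -v : v;
}

int main(void)
{
    int fb;
    double v;
    v = posit32_decode(0x40000000u, &fb);   /* 1.0:  27 fraction bits */
    printf("0x40000000 -> %g, %d fraction bits\n", v, fb);
    v = posit32_decode(0x7F000000u, &fb);   /* 2^24: only 21 left     */
    printf("0x7F000000 -> %g, %d fraction bits\n", v, fb);
    return 0;
}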
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: Unum and Posit floating point numbers - any experience?
« Reply #6 on: November 08, 2022, 05:32:43 pm »
Quote from: brucehoult on November 08, 2022, 12:30:59 pm
Claims by Posit inventor John Gustafson that you can simply replace 64 bit IEEE floats by 32 bit Posits are clearly ridiculous. Yes, sure, in some cases where 32 bit IEEE is almost but not quite enough.  But not in the general case.

This claim seems to be based on the assumption of using quires.  I haven't found an exact description of how they actually work, but the basic idea seems to be a high-precision accumulator that lets you do things like dot products with only one (significant) truncation at the end.  The idea is that 32 bits is enough for all of your coefficient arrays, and that the reason people use float64 is to avoid accumulation of truncation errors.  I have seen it described as an extension of FMA operations.

What I don't get is what that has to do with posits.  As far as I understand (again, without having seen a detailed description of how quires work) you could easily do exactly that today using float32 data with a float64 accumulator, or, on CPUs that support it, a float128 accumulator, and I know that some applications do this.  For applications that don't, I don't know why they would suddenly be able to do this with posits.  Is there some special behavior of posits that makes this easier or more widely applicable?  Of course, with current CPUs, if they want to support float128 they would need a huge and expensive multiplier even if you don't use it.  Adding a wide, addition-only data type could be more silicon-efficient, I guess, but again, that would be a relatively minor architectural change compared to switching the entire floating point representation.
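
To illustrate: here's a minimal sketch of the wide-accumulator idea using nothing but today's types (float products accumulated in a double, rounded once at the end). A real quire is wide enough to make the sum exact; a double accumulator merely makes it much better:

Code: [Select]
#include <stdio.h>

/* Naive dot product: the running sum is rounded to float every step. */
static float dot_naive(const float *a, const float *b, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i] * b[i];
    return s;
}

/* "Quire-style": accumulate in a wider type, round once at the end. */
static float dot_wide(const float *a, const float *b, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += (double)a[i] * (double)b[i];
    return (float)s;
}

int main(void)
{
    /* Cancellation case: the naive version loses the small term. */
    float a[3] = { 1e8f, 1.0f, -1e8f };
    float b[3] = { 1.0f, 1.0f,  1.0f };
    printf("naive: %g, wide: %g\n", dot_naive(a, b, 3), dot_wide(a, b, 3));
    return 0;
}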

To me the most interesting potential application is the 16 bit posit.  If that worked better than float16/bfloat16 for ML and graphics applications, that would be interesting.  But as I understand it, bfloat16 actually has more dynamic range than posit16, at the expense of relatively low resolution everywhere, but that seems to be a tradeoff ML people are happy to make.  At best one could argue that if everyone agreed on posit16 it would be good enough both for conventional applications that prefer the resolution of float16 and for ML applications that prefer bfloat16, but it seems like that ship has sailed.
 

Offline iMo (Topic starter)

  • Super Contributor
  • ***
  • Posts: 4790
  • Country: pm
  • It's important to try new things..
Re: Unum and Posit floating point numbers - any experience?
« Reply #7 on: November 08, 2022, 07:53:51 pm »
Afaik, compared to 16-bit float (±65504 range), the 16-bit posit's range is ±268 million, and posit also gives you about one additional decimal digit of precision around 1.0 (which is good for ML).
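
Quick sanity check on those numbers, assuming the usual posit16 parameter es = 1, for which maxpos = 2^((n-2)*2^es):

Code: [Select]
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* posit<n,es>: maxpos = 2^((n-2) * 2^es), minpos = 1/maxpos. */
    int n = 16, es = 1;
    int scale = (n - 2) * (1 << es);          /* 14 * 2 = 28 */
    printf("posit16 maxpos = 2^%d = %g\n", scale, ldexp(1.0, scale));
    printf("float16 max    = 65504\n");
    return 0;
}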
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: Unum and Posit floating point numbers - any experience?
« Reply #8 on: November 08, 2022, 08:13:45 pm »
Quote from: iMo on November 08, 2022, 07:53:51 pm
Afaik, compared to 16-bit float (±65504 range), the 16-bit posit's range is ±268 million, and posit also gives you about one additional decimal digit of precision around 1.0 (which is good for ML).

Most ML with 16 bit words is using or moving to bfloat16, which has a maximum range of about 3.4e38 (the same exponent range as float32).  It has substantially less precision, with only 8 significand bits (7 stored fraction bits plus the hidden bit), but that seems to be enough for most applications.  They do use float32 as the accumulator.
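
(bfloat16 is literally the top 16 bits of a float32, which is part of why it's so cheap to support.  A rough round-to-nearest-even conversion sketch, ignoring NaN special-casing which a production version needs:)

Code: [Select]
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* float32 -> bfloat16 (round to nearest even), result in low 16 bits.
   bfloat16 is just the top half of an IEEE binary32 bit pattern. */
static uint16_t f32_to_bf16(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    uint32_t lsb = (u >> 16) & 1;       /* for ties-to-even      */
    u += 0x7FFFu + lsb;                 /* round off the low half */
    return (uint16_t)(u >> 16);
}

static float bf16_to_f32(uint16_t h)
{
    uint32_t u = (uint32_t)h << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void)
{
    float x = 3.14159265f;
    uint16_t h = f32_to_bf16(x);
    printf("%.8f -> 0x%04x -> %.8f\n", x, h, bf16_to_f32(h));
    return 0;
}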

And to restate Bruce's point a bit more explicitly: a GPU or ML accelerator designed exclusively for bfloat16 operation can use 8 bit multipliers.  One using posit16 must have 13 bit multipliers.  That requires substantially more silicon resources, so it *should* get more precision.  To be more explicit: essentially any piece of silicon operating on posits is going to expand them into a fixed-width fraction and exponent, so the silicon requirements are similar to those of a fixed-width representation with the maximum exponent and fraction sizes the posit can represent.  The variable precision format saves you on memory and bandwidth, but not really on hardware resources.

A hypothetical 20 bit floating point number with 1 sign bit, 6 exponent bits, and 13 fraction bits would have essentially the same hardware requirements as a posit16, and would have greater or equal resolution everywhere.  It would cost 25% more bandwidth and storage.  It would also obviously not pack well into byte- and word-oriented data buses, but a similar breakpoint anywhere between 16 and 32 bits will favor IEEE-style floats over posits in the same way.
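
To put numbers on the multiplier width, a little table generator, again assuming posit16 with es = 1.  The shortest regime leaves 12 stored fraction bits, plus the hidden bit, hence the 13-bit multiplier:

Code: [Select]
#include <stdio.h>

int main(void)
{
    /* posit16, es = 1: 1 sign bit, then a regime of (run + 1) bits
       (the run plus its terminating bit), then es exponent bits,
       and whatever is left over is fraction. */
    int n = 16, es = 1;
    for (int run = 1; run <= n - 2; run++) {
        int frac = n - 1 - (run + 1) - es;
        if (frac < 0) frac = 0;
        printf("regime run %2d -> %2d fraction bits (+ 1 hidden)\n",
               run, frac);
    }
    return 0;
}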
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14482
  • Country: fr
Re: Unum and Posit floating point numbers - any experience?
« Reply #9 on: November 08, 2022, 09:31:35 pm »
Quote from: brucehoult on November 08, 2022, 12:30:59 pm
Quote from: SiliconWizard on November 07, 2022, 07:21:57 pm
But most of all, the claim that they'd be more efficient/use fewer resources than IEEE FP has apparently not been proven true. A number of papers have shown the opposite, implementing them on FPGA or silicon.

Not sure what you mean by this.

A Posit FPU will clearly need a wider multiplier and bigger exponents than an IEEE FPU for the same data size. That's because Posits give more precision for smallish numbers and much more range for big numbers.

An FPU for 32 bit Posits is bigger in area and power use than an FPU for 32 bit IEEE, but nowhere near the area or power use of an FPU for 64 bit IEEE.

Claims by Posit inventor John Gustafson that you can simply replace 64 bit IEEE floats by 32 bit Posits are clearly ridiculous. Yes, sure, in some cases where 32 bit IEEE is almost but not quite enough.  But not in the general case.

That's precisely part of the claims I mentioned above. But the other argument was about not having to deal with all the particular cases that IEEE FP has to deal with, presumably saving a lot of logic.
But when you compare actual implementations of posits to implementations of IEEE FP, uh... just the software implementations seem much more complex.

So yes, the inventor has made extraordinary claims that were questionable, which doesn't bode well for the whole thing.
 

Offline ejeffrey

  • Super Contributor
  • ***
  • Posts: 3719
  • Country: us
Re: Unum and Posit floating point numbers - any experience?
« Reply #10 on: November 08, 2022, 11:11:48 pm »
The main reduction in complexity seems to be getting rid of denormals without having a resolution cliff near zero.  I'm not sure how much of an issue that really is in hardware FPU design?

It also gets rid of rounding modes that are almost never used.  That's fine, they are rarely used and many languages don't even provide the capability to change them, but I could do without the author's pejorative comments along the lines of "if your software uses these other rounding modes it's probably wrong and you should rewrite it".  The author has the same stance towards NaN, and while I sort of understand the position, it is not very pragmatic if you actually want adoption.  The amount of software out there that expects NaNs to work mostly the way IEEE floats do is staggering, and rewriting it all is completely infeasible.  So claiming that posits are a drop-in replacement for IEEE floats is pretty much outright wrong.
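
(For reference, C has exposed those rounding modes since C99 via <fenv.h>.  A minimal demo below; note that fesetround lives in libm on glibc, and compilers may need flags such as GCC's -frounding-math to reliably honor mode changes.)

Code: [Select]
#include <stdio.h>
#include <fenv.h>

int main(void)
{
    /* volatile keeps the compiler from folding the division at
       compile time, where the rounding mode would not apply. */
    volatile float a = 1.0f, b = 3.0f;

    fesetround(FE_TONEAREST);              /* the default mode */
    printf("to nearest: %.10f\n", a / b);

    fesetround(FE_UPWARD);
    printf("upward:     %.10f\n", a / b);

    fesetround(FE_DOWNWARD);
    printf("downward:   %.10f\n", a / b);

    fesetround(FE_TONEAREST);              /* restore before leaving */
    return 0;
}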
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4039
  • Country: nz
Re: Unum and Posit floating point numbers - any experience?
« Reply #11 on: November 09, 2022, 01:03:47 am »
In my experience the main use of the other rounding modes is the occasional single instruction in the middle of highly optimised hand-written code implementing things such as transcendental functions.
 

