> But most of all, the claim that they'd be more efficient/use fewer resources than IEEE FP has apparently not been proven true. A number of papers have shown the opposite, implementing them on FPGA or silicon.
Not sure what you mean by this.
A Posit FPU will clearly need a wider multiplier and a bigger exponent datapath than an IEEE FPU for the same data size. That's because Posits give more precision for smallish numbers and much more range for big numbers.
An FPU for 32 bit Posits is bigger in area and power use than an FPU for 32 bit IEEE, but nowhere near the area or power use of an FPU for 64 bit IEEE.
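Back-of-envelope, assuming the standard posit layout (sign, regime, es exponent bits, fraction) — the helper name here is mine, it just counts bits:

```python
def max_significand_bits(nbits: int, es: int) -> int:
    """Widest significand (hidden bit + fraction) a posit of this size
    can carry; it occurs in the shortest possible regime (2 bits).
    Assumes the standard layout: sign, regime, es exponent bits, fraction."""
    return 1 + (nbits - 1 - 2 - es)

# A 32-bit posit FPU (es = 2) must handle up to 28-bit significands,
# so roughly a 28x28 multiplier:
print(max_significand_bits(32, 2))   # → 28

# IEEE binary32 needs 24x24 (1 hidden + 23 fraction bits) and
# IEEE binary64 needs 53x53 -- the posit32 FPU sits in between.
```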
Claims by Posit inventor John Gustafson that you can simply replace 64 bit IEEE floats with 32 bit Posits are clearly ridiculous. Yes, sure, in some cases where 32 bit IEEE is almost but not quite enough. But not in the general case.
Posits have at least as many bits of precision as IEEE FP for smallish numbers. How small? With the usual parameters, up to roughly the point where IEEE can no longer represent every consecutive integer: around 2^23 (8.4e6) for single precision and 2^53 (9.0e15) for double precision. And likewise for numbers between the reciprocals of those points and 1.0, and for the corresponding negative numbers, obviously.
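A rough way to see where the crossover happens, again assuming the standard posit layout (the function is my own sketch, not a real posit library):

```python
import struct

def posit_fraction_bits(nbits: int, es: int, e: int) -> int:
    """Fraction bits available to a posit encoding a value with binary
    exponent e (i.e. magnitude in [2^e, 2^(e+1))). Assumes the standard
    layout: sign, regime, es exponent bits, fraction. Returns 0 if the
    regime fills the word."""
    k = e >> es                                  # regime value (floor division)
    regime_len = k + 2 if k >= 0 else -k + 1     # run length + terminator bit
    return max(0, nbits - 1 - regime_len - es)

# IEEE binary32 carries 23 explicit fraction bits everywhere.
# Near 1.0 (e = 0) a 32-bit posit with es = 2 carries 27:
print(posit_fraction_bits(32, 2, 0))    # → 27

# The advantage shrinks as the regime lengthens: equal at [2^16, 2^20),
# fewer bits than IEEE beyond that.
print(posit_fraction_bits(32, 2, 16))   # → 23
print(posit_fraction_bits(32, 2, 20))   # → 22

# float32 itself stops representing every consecutive integer at 2^24:
f = struct.unpack('f', struct.pack('f', float(2**24 + 1)))[0]
print(f == 2**24)                       # → True (2^24 + 1 rounded away)
```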
For numbers near ±1, Posits have up to an extra decimal digit of precision over IEEE. That can really make the difference sometimes. As can the absence of abrupt overflow to infinity at ±3.4e38 (especially) and ±1.8e308, and of the drop into denormals at (approximately) their reciprocals.
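To put numbers on both effects — a quick stdlib-only check, using `ctypes.c_float` to round a Python double to IEEE binary32:

```python
import ctypes
import math

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to IEEE binary32 via a C cast."""
    return ctypes.c_float(x).value

# Near 1.0 a 32-bit posit (es = 2) carries 27 fraction bits to
# float32's 23; four extra bits is about 1.2 decimal digits:
print(round(4 * math.log10(2), 2))    # → 1.2

# IEEE binary32 overflows abruptly to infinity just past ~3.4e38;
# a posit instead saturates at its largest finite value.
print(math.isinf(to_f32(3.4e38)))     # → False (still finite)
print(math.isinf(to_f32(3.5e38)))     # → True
```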
For a given size in RAM, Posits are slightly superior, no question in my mind. They need slightly larger FPUs to implement them, and no one is shipping such FPUs in real hardware just yet.