And rather curiously, even the "linear" part of a floating point number, the mantissa, is not actually linear.
For instance, suppose decimal numbers are to be represented to a precision of 2 significant figures. Then the range of values that can be represented to full two digit precision is from 1.0 x 10^n to 9.9 x 10^n.
However, a one digit change from 1.0 to 1.1 represents a difference of 10% [= (1.1 - 1.0) / 1.0], whereas a one digit change from 9.8 to 9.9 is only about 1% [= (9.9 - 9.8) / 9.8]. The scale is quite non-linear.
Sure, you can add more digits, but the general non-linearity remains. For instance, 1.00000 to 1.00001 is 0.001%, whereas 9.99998 to 9.99999 is 0.0001%. The top of the range is still ten times more precise than the bottom.
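These relative step sizes are easy to check directly. A quick sketch in Python (the `rel_step` helper is just for illustration, not part of any library):

```python
def rel_step(value, step):
    """Relative change produced by a one-digit increment at `value`."""
    return step / value

# Two significant figures: the smallest increment is 0.1
print(f"{rel_step(1.0, 0.1):.1%}")          # 10.0%
print(f"{rel_step(9.8, 0.1):.1%}")          # 1.0%

# Six significant figures: the smallest increment is 0.00001
print(f"{rel_step(1.00000, 0.00001):.4%}")  # 0.0010%
print(f"{rel_step(9.99998, 0.00001):.4%}")  # 0.0001%
```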
This is actually a good reason why computers should work in binary. For instance, going from 100000 to 100001 in binary is 3.1%, and going from 111110 to 111111 is 1.6%. The difference in representational precision only varies by a factor of 2 rather than a factor of 10. This means that binary values can cover the range of possible values for a given precision with a more even distribution.