Consider an instrument that outputs some operator-controlled parameter (V, I, Ohm, W, Hz, etc.). Two output resolution choices are available/feasible: either 40k counts/decade or 50k counts/decade.
For example, a voltage source / power supply with a 0-10V output range could use either of these control schemes:
0-10V / 50k counts gives a 0.2 mV interval; last digit steps are 0.2, 0.4, 0.6, 0.8, 1.0 mV.
0-10V / 40k counts gives a 0.25 mV interval; last digit steps are 0.25, 0.5, 0.75, 1.0 mV.
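A minimal sketch of the arithmetic behind the two schemes (the 0-10V range, count values, and the 0.5 mV test point are just the example figures above; the scheme names are placeholders):

```python
from fractions import Fraction

FULL_SCALE_V = Fraction(10)                      # 0-10 V output range
SCHEMES = {"50k counts": 50_000, "40k counts": 40_000}

for name, counts in SCHEMES.items():
    step = FULL_SCALE_V / counts                 # smallest programmable increment
    print(f"{name}: step = {float(step) * 1e3:.2f} mV")
    # Check whether a "round" target such as 0.5 mV lands exactly on a step
    target_v = Fraction(5, 10_000)               # 0.5 mV expressed in volts
    exact = (target_v / step).denominator == 1   # exact only if target is an integer number of steps
    print(f"  0.5 mV exactly representable: {exact}")
```

Running this prints a 0.20 mV step (0.5 mV not exactly representable) for the 50k scheme and a 0.25 mV step (0.5 mV exactly representable) for the 40k scheme, matching the lists above.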
The advantage of the first scheme is (slightly) higher resolution. The second approach achieves similar resolution but also allows an exact half unit (i.e. 0.5 mV) to be expressed cleanly, which the 0.2 mV scheme cannot.
Which is preferred?