So the micro-voltmeter was working quite well, and all seemed fine. But on taking a second look, the output buffer is, well, non-optimal at best. It is instructive to go through why in some detail.
The adjustment has been omitted for simplicity. I shall just assume that it magically has appropriate values to give exactly 1 volt on the display, corresponding to 1 V input relative to the -2.5 V rail.

Now what happens if the positive regulator output changes by 100 ppm due to a 1 °C temperature change, given its 100 ppm/°C output TC rating?
100 ppm (0.01%) of 2.5 V is 250 µV. This is scaled by a factor of -1 by R2/R1, giving 250 µV at the input of the DVM module. That is 2.5 digits, which is ridiculously high; it needs to be at least 10x better than that. Then we have the tracking TC of R2 to R1, which is 200 ppm/°C (worst case), so that is bad as well.
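To make the arithmetic concrete, here is a small Python sketch of the positive-rail contribution. The assumed values are the ones implied above: a 2.5 V rail, 100 ppm/°C regulator TC, unity R2/R1 gain, and 100 µV per display digit (since 250 µV corresponds to 2.5 digits):

```python
# Positive-rail error contribution (assumed values, for illustration only)
V_POS = 2.5            # positive rail, volts
TC_REG = 100e-6        # regulator output TC, per degC
GAIN_R2_R1 = -1.0      # inverting gain set by R2/R1
UV_PER_DIGIT = 100.0   # DVM resolution implied by "250 uV = 2.5 digits"

dV_rail = V_POS * TC_REG * 1.0       # 1 degC step -> 250 uV rail shift
dV_dvm = abs(GAIN_R2_R1) * dV_rail   # scaled to the DVM module input
print(f"DVM input shift: {dV_dvm*1e6:.0f} uV "
      f"= {dV_dvm*1e6/UV_PER_DIGIT:.1f} digits")
```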
Let’s try the same thing on the negative rail. Suppose the -2.5 V rail rises by 250 µV. The buffer output falls by 250 µV x 100/240 = 104 µV. Since the DVM module input is referenced to the -2.5 V rail, the two shifts add, so the overall change of input voltage (as seen by the DVM module) is 354 µV. The tracking TC of R2/R3 has the same scaling factor.
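The negative-rail arithmetic works the same way; a quick sketch using the 100/240 ratio quoted above, again assuming the DVM input is referenced to the -2.5 V rail:

```python
# Negative-rail error contribution (assumed 250 uV rise of the -2.5 V rail)
dV_neg_rail = 250e-6                  # rail shift, volts
dV_out = dV_neg_rail * 100.0 / 240.0  # buffer output falls by ~104 uV
dV_seen = dV_neg_rail + dV_out        # DVM input referenced to the -2.5 V rail
print(f"output shift {dV_out*1e6:.0f} uV, "
      f"total seen by the DVM {dV_seen*1e6:.0f} uV")
```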
Whilst one could (theoretically) improve all the component TCs by a factor of 50 or so, the circuit is fundamentally lousy.
I did consider using a -1 V reference hanging down from ground, and then powering the DVM module from a buffered version of this. But that means the DVM module has to run straight from the unregulated battery, and it only works when the top battery is above 4 V. Not workable.
So the next version will need a x1 diff amp with four well-matched resistors, and a 1 V regulator standing up from the -2.5 V rail.
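For context on why the matching matters, here is a rough sketch using the standard textbook approximation for a difference amplifier (not a measurement of this circuit): the worst-case CMRR set by resistor tolerance is roughly (1 + gain)/(4 x mismatch), so with a x1 stage the common-mode error from the rails scales directly with how well the four resistors track:

```python
# Worst-case CMRR of a unity-gain diff amp limited by resistor matching
# (standard approximation, assumed matching figures for illustration)
def diffamp_cmrr(gain: float, mismatch: float) -> float:
    """Worst-case common-mode rejection set by resistor tolerance."""
    return (1.0 + gain) / (4.0 * mismatch)

for tol in (1e-3, 1e-4):   # 0.1 % and 0.01 % matching (assumed values)
    print(f"matching {tol*100:.2f} %: worst-case CMRR ~ "
          f"{diffamp_cmrr(1.0, tol):.0f}")
```

With 0.01 % matching that is a CMRR of about 5000, which would turn a 250 µV common-mode shift into well under a digit of error.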