see how the raw numbers correlate with the calibration constant.

That won't be necessary! I just finished figuring out the gain constants, with the help of partial MAME emulation.

My cal dumper produces output like this for each range:

`entry # offset gain? range`

`00 000368 3EDC5 30 mV DC`

The offset is BCD, as we knew. I thought the gain was a plain hex value, but not quite: it's a weird mix of signed hex digits and decimal scaling (...) I'll try to explain.

1) The ADC gives a raw, 8-digit packed BCD reading (4 bytes), e.g. 1021520 (1 021 520). This is sign-extended into 5 bytes. I'm not sure whether the lowest digit is significant or always 0.

2) Add the offset: pregain = ADC + cal_offset * 10 == ADC + 3680 == 1 025 200

2.5) Not sure exactly why, but it seems we add 50 here. Maybe for rounding? 1 025 250

3) Then we need the gain factor, "gain_const". In the cal dump it's printed as "3EDC5" and must be interpreted this way:

`get digit_n as hex;`

`if digit_n > 8, then digit_n = digit_n - 16;`

`gain_const = 1 + (digit_1 / 100) + (digit_2 / 1000) + ... + (digit_5 / 1E6);`

So for the example above, "3EDC5", we have:

`gain_const = 1 + (3 / 100) + (-2 / 1000) + (-3 / 1E4) + (-4 / 1E5) + (5 / 1E6);`

gain_const = 1.027665
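If I got that right, the digit decoding can be sketched in Python like this (function and variable names are mine, just for illustration, not from the firmware):

```python
def decode_gain(gain_str):
    """Turn a cal-dump gain string like "3EDC5" into a multiplier.

    Each character is read as a hex digit; digits above 8 are taken
    as negative (digit - 16), then weighted 1/100, 1/1000, ... 1/1E6.
    """
    gain = 1.0
    for n, ch in enumerate(gain_str):
        digit = int(ch, 16)
        if digit > 8:                   # signed nibble: 9..F -> -7..-1
            digit -= 16
        gain += digit / 10 ** (n + 2)   # weights 1E-2 down to 1E-6
    return gain

print(round(decode_gain("3EDC5"), 6))   # → 1.027665
```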

4) display_raw = pregain * gain_const, so 1 053 614. Well, the simulation gives 1 053 610, but rounding / truncation, so yeah.

5) display = display_raw / 10 and/or adjust the decimal point. I lost track of the number of trailing 0's!
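Putting the whole chain together, here's a rough end-to-end sketch (again, names and the final truncation are my guesses from the behaviour above, not disassembly):

```python
def decode_gain(gain_str):
    # Signed-hex-digit decoding of the gain constant, as described in step 3.
    gain = 1.0
    for n, ch in enumerate(gain_str):
        digit = int(ch, 16)
        if digit > 8:
            digit -= 16
        gain += digit / 10 ** (n + 2)
    return gain

def simulate_range(adc_raw, cal_offset, gain_str):
    pregain = adc_raw + cal_offset * 10        # step 2: offset scaled by 10
    pregain += 50                              # step 2.5: mystery rounding term
    display_raw = pregain * decode_gain(gain_str)  # steps 3-4
    return int(display_raw)                    # truncate; firmware may round

print(simulate_range(1021520, 368, "3EDC5"))   # prints 1053613
```

That lands within a few counts of the 1 053 610 the emulator shows, consistent with the rounding/truncation happening somewhere in the fixed-point math rather than in one final float multiply.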

And that's pretty much all there is to it.