Hi all,
To follow up a little: the reason I would like to know the specific procedure for this multimeter is that I have the option of using our calibration facilities at work to check it and, if necessary, adjust it (at no cost beyond perhaps a beer or two). There is certainly no way I could do it otherwise, and it would be futile to try. In the meantime, I can certainly play around; compared against measurements made with my Fluke 179, the M 2037 is functionally absolutely fine on all ranges.
But, nonetheless, as I have found out, the meter contains fixed calibration factors stored at manufacture (in 1991/2; it has never been calibrated since, and sat unused, with batteries in it, for the best part of twenty years; not with me, but with someone else from whom I inherited it). These factory factors can be reloaded in calibration mode, but doing so also deletes any previous calibration. The zero point is set by shorting the DC terminals together and storing that reading. Taking DC voltage as the example: the meter checks that the calibration voltage applied to the terminals is within range. For the 300 mV range this calibration voltage is 290 mV; apply anything outside the tolerance window, say 280 mV or 300 mV, and it will refuse to accept it. Once the applied voltage is within range, the meter assumes it is exactly 290 mV and calculates and stores the new calibration factor. That factor can later be deleted to revert to the fixed factory factors. The same philosophy applies to all the other ranges: 2.9 V DC, 29 V DC, 290 V DC, 29 mA, 290 mA, and so on.
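In case it helps anyone picture it, here is a minimal sketch in Python of how I understand that per-range logic works. To be clear, this is my own guess at the internals based on the behaviour above: the function name, the exact acceptance window (I've assumed roughly +/-3 %, which is consistent with 280 mV and 300 mV both being refused at the 290 mV point), and the "factor = expected / measured" form are all my assumptions, not anything from Metrawatt documentation.

```python
# Hedged sketch of the calibration behaviour described above. The +/-3 %
# acceptance window and the gain-factor formula are assumptions on my part.

def calibrate_range(measured_mv, cal_point_mv=290.0, window=0.03):
    """Mimic the meter's per-range gain calibration.

    measured_mv:  what the uncalibrated meter currently reads
    cal_point_mv: the voltage the procedure tells you to apply (290 mV
                  for the 300 mV range)
    window:       fractional acceptance band around the cal point
    """
    # First the meter checks the applied signal is plausibly the
    # calibration voltage; anything outside the window is refused.
    if abs(measured_mv - cal_point_mv) > window * cal_point_mv:
        raise ValueError("calibration voltage out of range; refused")
    # Once accepted, it assumes the input is exactly the cal point and
    # stores the ratio as the new gain factor for this range.
    return cal_point_mv / measured_mv

# Example: the raw reading is 287.1 mV while (nominally) exactly 290 mV
# is applied, so the stored factor scales later readings up to match.
factor = calibrate_range(287.1)
corrected = 287.1 * factor  # back to 290.0 mV
```

On this model, applying 280 mV or 300 mV raises the "refused" error, matching what the meter does, while the quality of the resulting calibration depends entirely on how accurate your 290 mV source actually is, since the meter trusts it blindly.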
What led me to this information? The service manual for the Metrawatt MA 5D, and the assumption that calibration procedures across similar models of a similar vintage might be alike; in this case they were.
Thanks,
Chris