In the case of multimeters it is the meter's accuracy as a whole that is tested, not just the ADC. A lot of modern meters are then tweaked in software against a standard or known voltage, current, resistance etc. to improve their calibration. Old school, they would have had trim pots in the circuitry to do the same thing.
Writing a meter's %range and %reading spec also has to take into account any non-linearity in the overall meter design. In theory these figures are also worst-case numbers across the meters produced, so some will be well inside the spec and none outside.
Two of my meters below, both 5 years old, checked against a 5 V reference (4.99989 V at 20 degrees this morning). The meters had been turned on for about 5 minutes and the reference has been running for several days.
Victor 86B spec on DC 400mV/4V/40V/400V/1000V ranges: ±(0.5% of reading + 4 counts)
So for a 5 V nominal input on the 40 V range this works out to 5 ± (0.025 V + 0.04 V), i.e. 5 ± 0.065 V. Both meters are well within their spec.
Dropping the same reference across a badly constructed voltage divider made from Vishay 0.005% resistors (measured 0.99957 V DC):
For this nominal 1 V input the spec would be 1 ± (0.005 V + 0.004 V), i.e. 1 ± 0.009 V, and once again the meter is well within spec. This would then be repeated for AC, resistance and all the other ranges.
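For anyone who wants to grind through the rest of the ranges, here is a minimal Python sketch of the same ±(% of reading + counts) arithmetic. The resolutions assume the 40 V range resolves 0.01 V per count and the 4 V range 0.001 V per count, which is what the figures above imply for a 4000-count display.

```python
def spec_band(reading, pct_of_reading, counts, resolution):
    """Worst-case error for a +/-(% of reading + counts) style spec."""
    return pct_of_reading / 100.0 * reading + counts * resolution

# 5 V nominal on the 40 V range: 0.01 V per count -> +/-0.065 V
band_5v = spec_band(5.0, 0.5, 4, 0.01)
print(f"5 V input: +/-{band_5v:.3f} V -> {5.0 - band_5v:.3f} .. {5.0 + band_5v:.3f} V")

# 1 V nominal on the 4 V range: 0.001 V per count -> +/-0.009 V
band_1v = spec_band(1.0, 0.5, 4, 0.001)
print(f"1 V input: +/-{band_1v:.4f} V -> {1.0 - band_1v:.4f} .. {1.0 + band_1v:.4f} V")
```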
To arrive at this 'assumption', however, I am relying on my Agilent 34401A being perfectly calibrated, which I have no real way of knowing as its calibration lapsed 2 years ago. Due to its accuracy and precision I have a high degree of 'certainty' that the figures I took from it are more than up to the task of checking my much less accurate and precise Victor meters. If it were in calibration I would have traceability to a known standard and could actually put a number to that 'certainty'.
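To illustrate what putting a number to that 'certainty' could look like: one common approach is a test uncertainty ratio, comparing the reference's own uncertainty with the error the meter under test is allowed. The reference uncertainty below is a made-up placeholder, not the 34401A's real datasheet spec.

```python
# REF_UNCERTAINTY is a placeholder, not the 34401A's datasheet figure --
# substitute the real 1-year DC spec at this level if you want the actual number.
DUT_ALLOWED_ERROR = 0.065   # Victor 86B worst-case error at 5 V, from the spec above (V)
REF_UNCERTAINTY = 0.0005    # assumed uncertainty of the reference reading (V)

tur = DUT_ALLOWED_ERROR / REF_UNCERTAINTY
print(f"Test uncertainty ratio ~ {tur:.0f}:1")  # the usual rule of thumb is 4:1 or better
```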
There is a thread running in the metrology section of the forum about getting started, with some links to good reading.
