When you study how to perform uncertainty analysis on a measurement, you'll come across guidance telling you to use the standard deviation of the mean of your readings as your Type A repeatability component: the sample standard deviation you calculate, divided by the square root of the number of readings you made.
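For concreteness, here's what that textbook calculation looks like as a quick Python/numpy sketch; the `readings` array is just made-up numbers standing in for repeat measurements of a DUT:

```python
# A minimal sketch of the usual Type A calculation, with invented readings.
import numpy as np

readings = np.array([1.00012, 1.00009, 1.00014, 1.00011, 1.00010,
                     1.00013, 1.00008, 1.00012, 1.00011, 1.00010])

n = readings.size
s = readings.std(ddof=1)       # sample standard deviation of the readings
u_typeA = s / np.sqrt(n)       # the "standard deviation of the mean" per the guidance

print(f"mean = {readings.mean():.6f} V")
print(f"s    = {s:.2e} V")
print(f"u_A  = {u_typeA:.2e} V")
```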
This guidance has always bothered me, because sample time is cheap, and the implication that piling enough readings onto your DUT can drive the Type A uncertainty down to almost nothing seems wrong. But then I realized something that doesn't seem to be common knowledge in the metrology community: the readings a voltmeter hands you are already means (each one is an average over the instrument's integration window), so you don't get to turn around and calculate a standard deviation of the mean of those means. I *think* what the guidance in the GUM is really saying is that the standard deviation you calculate from your samples is already the standard deviation of the mean.
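To illustrate the point, here's a toy simulation in which everything is invented (the noise level, the number of internal conversions per displayed reading): an imagined instrument averages a pile of raw ADC conversions into each reading it reports, so the scatter of the reported readings already looks like the raw scatter divided by the square root of the internal averaging count.

```python
# Toy simulation: a hypothetical meter averages m raw ADC conversions per
# displayed reading, so the displayed readings are already means.
import numpy as np

rng = np.random.default_rng(0)
sigma_raw = 10e-6        # assumed RMS noise of a single raw conversion, in volts
m = 1000                 # assumed raw conversions averaged per displayed reading
n_readings = 50          # readings you actually log from the front panel

raw = rng.normal(1.0, sigma_raw, size=(n_readings, m))
displayed = raw.mean(axis=1)              # what the meter shows you

print(f"scatter of raw conversions    ~ {raw.std(ddof=1):.2e} V")
print(f"scatter of displayed readings ~ {displayed.std(ddof=1):.2e} V")
print(f"sigma_raw / sqrt(m)           = {sigma_raw / np.sqrt(m):.2e} V")
```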
I was, and still am, unsure exactly what is meant by the phrase "standard deviation of the mean", so I did some calculations in an Excel spreadsheet to try to clear things up for myself:
Suppose you had 10,000 samples of a voltage. You could calculate the mean and standard deviation of all of them. Or you could split the same data into 10 sets of 1,000 samples, calculate the mean of each set, and take the standard deviation of those 10 means. Or 100 sets of 100 samples, or 2 sets of 5,000 samples, or whatever other split you like. The mean stays essentially the same no matter how you slice it, but the standard deviation of the set means shrinks as the sets get bigger, because an average of many samples scatters far less than the individual samples do. In fact it tracks the standard deviation of all 10,000 samples divided by the square root of the number of samples *per set*, and that quantity is what gets called the standard deviation of the mean. Meaning, you can take the standard deviation of the 10,000 samples, divide it by the square root of 1,000, and get roughly the same number as taking the standard deviation of the 10 set averages directly.
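Here's the same experiment redone in numpy instead of Excel, with simulated data standing in for my spreadsheet (the noise level is arbitrary):

```python
# Split 10,000 simulated "voltage" samples into sets of various sizes and
# compare the std of the set means with sigma_all / sqrt(samples per set).
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(1.0, 10e-6, size=10_000)    # made-up noisy voltage data
sigma_all = samples.std(ddof=1)

for m in (100, 1_000, 5_000):                    # samples per set
    k = samples.size // m                        # number of sets
    set_means = samples.reshape(k, m).mean(axis=1)
    print(f"{k:>4} sets of {m:>5}: std of set means = {set_means.std(ddof=1):.2e}"
          f"   sigma_all/sqrt(m) = {sigma_all / np.sqrt(m):.2e}")

# For comparison, the standard deviation of the mean of the whole data set:
print(f"sigma_all / sqrt(10000) = {sigma_all / np.sqrt(samples.size):.2e}")
```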
So my argument, and perhaps this is already common knowledge and just not in my circle, is that the guidance to use the standard deviation of the mean in our uncertainty calculations is a bit misleading. It should state that the standard deviation we calculate from our samples is really already a standard deviation of the mean.
It is Saturday night and I'm making a post about this. What the hell have I done with my life.