Consider just resistors, but this question applies to all components that are marked with tolerance data.
In the early days of solid state as we know it today, transistors were obviously made in large batches, then tested before a part # was assigned. That's why we had thousands of 2N#'s to deal with in the germanium days.
Now that I have access to a multiple-decimal-place DMM, after testing many hundreds of resistors it almost seems like their markings are merely a guideline. A batch of a few hundred identical, really ancient carbon resistors I tested have all changed significantly, and in the same direction.
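To put numbers on that kind of batch drift, here is a quick sketch of the arithmetic (the marked value, tolerance, and measurements below are invented for illustration, not my actual readings):

```python
# Hypothetical batch check: how far a set of measured resistors has
# drifted from the marked value. All numbers are invented.
from statistics import mean, stdev

marked = 10_000.0   # marked value in ohms (a hypothetical 10 k carbon resistor)
tolerance = 0.05    # marked tolerance, 5%

# Invented measurements showing a uniform upward drift, as old carbon
# composition resistors often exhibit.
measured = [10_420.0, 10_510.0, 10_380.0, 10_470.0, 10_550.0]

avg = mean(measured)
spread = stdev(measured)
within = sum(abs(r - marked) / marked <= tolerance for r in measured)

print(f"mean = {avg:.0f} ohm, stdev = {spread:.0f} ohm")
print(f"{within} of {len(measured)} still within the marked {tolerance:.0%}")
```

The tell-tale sign of aging rather than sorting is exactly what this shows: the mean has walked away from the marked value while the spread stays small.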
Does the marking indicate the testing, sorting, or manufacturing accuracy, or perhaps all three?
A resistor marked 1% will, upon testing, be clustered around the marked value in a Gaussian curve. But what about its tempco and long-term aging effects? Are these caveat emptor only?
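For scale, tempco alone can eat a meaningful fraction of a 1% band. A minimal sketch using the usual first-order model R(T) = R0·(1 + tempco·(T − T0)); the 100 ppm/°C figure is a hypothetical datasheet value, not anything from the parts discussed here:

```python
# Sketch of how tempco moves a "1%" resistor: a part that is dead-on at
# 25 degC drifts measurably at other temperatures. The tempco value is a
# hypothetical datasheet figure chosen for illustration.

def r_at_temp(r_nominal, tempco_ppm, temp_c, ref_c=25.0):
    """Resistance at temp_c, given tempco in ppm/degC referenced to ref_c."""
    return r_nominal * (1.0 + tempco_ppm * 1e-6 * (temp_c - ref_c))

r0 = 10_000.0   # 10 k, marked 1%
tc = 100.0      # 100 ppm/degC, a common film-resistor spec

for t in (0.0, 25.0, 70.0):
    r = r_at_temp(r0, tc, t)
    print(f"{t:5.1f} degC: {r:.1f} ohm ({(r - r0) / r0:+.3%})")
```

At 100 ppm/°C, a 45 °C excursion is 0.45% all by itself, comparable to the marked 1% tolerance but not counted in it.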
Really old wire-wound resistors of the 0.1% and 0.05% class are testing close to the marked value.
If a person were to take an unmarked resistor and test it with the intention of labeling it, would the stated tolerance be the statistical uncertainty of the testing instrument and procedure, or would it reflect accurate testing of a large number of samples?
Are 0.05% components "better" than 1% components, or have they just been measured better?
And finally, if one were to label a resistor that read 10.0010 ohms, how would he label it, and with what tolerance?
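One way to frame that last question numerically: the deviation from a round 10 ohms is only 1 milliohm (0.01%), so whatever limits the label is the meter, not the part. A sketch, assuming a hypothetical 0.02%-of-reading meter spec and a simple worst-case combination (root-sum-square is also common):

```python
# Deviation of the reading from a round nominal value, plus a
# (hypothetical) meter uncertainty, gives a defensible label tolerance.

measured = 10.0010   # ohms, as read
nominal = 10.0       # the round value we would mark

deviation = (measured - nominal) / nominal
print(f"deviation from 10 ohm: {deviation:+.3%}")

# A label can't honestly claim tighter than the measurement uncertainty,
# so combine the deviation with an invented 0.02%-of-reading meter spec:
meter_uncertainty = 0.0002   # 0.02%, hypothetical instrument figure
label_tolerance = abs(deviation) + meter_uncertainty
print(f"defensible label: 10 ohm, +/-{label_tolerance:.2%}")
```

Under those assumptions the part could be marked 10 ohms at 0.03%, or 10.001 ohms at whatever the meter alone supports; either way the tolerance figure belongs to the measurement, not just the resistor.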
Thanks
George Dowell