When combining components to meet a requested replacement value, I'd like to know the correct way to determine whether the replacement value is within spec.
https://www.eevblog.com/forum/projects/how-about-combining-some-components/
The requested target is something like: 987.6 Ω ± 300 mΩ
The replacement value could be something like: 987.8 Ω ± 200 mΩ
Is there a standardized way of determining whether the latter is within spec, with ..% probability?
I've done some research myself, but I don't expect to get a solid answer using Google.
It would be great if someone knows of a method better than taking the uncertainty bounds of both and checking whether the bounds fully overlap.
I've edited the uncertainty from 987.6 Ω ± 3 mΩ to 987.6 Ω ± 300 mΩ
and 987.8 Ω ± 2 mΩ to 987.8 Ω ± 200 mΩ
Thanks for responding!
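For what it's worth, here is a minimal sketch in Python of both the simple containment check and a probabilistic version. The probabilistic part assumes the ± values are expanded uncertainties of a normal distribution with coverage factor k = 2; that coverage factor is my assumption, not something stated in the question.

```python
import math

def contained(center, half_width, target_center, target_half_width):
    """Simple check: is the replacement's uncertainty interval fully
    inside the requested interval?"""
    lo, hi = center - half_width, center + half_width
    t_lo = target_center - target_half_width
    t_hi = target_center + target_half_width
    return t_lo <= lo and hi <= t_hi

def prob_within(center, half_width, target_center, target_half_width, k=2.0):
    """Probability that the true value lies inside the requested interval,
    treating the replacement as normal with sigma = half_width / k
    (k = 2 is an assumed coverage factor)."""
    sigma = half_width / k
    t_lo = target_center - target_half_width
    t_hi = target_center + target_half_width
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - center) / (sigma * math.sqrt(2.0))))
    return cdf(t_hi) - cdf(t_lo)

# 987.8 Ω ± 200 mΩ against a target of 987.6 Ω ± 300 mΩ
print(contained(987.8, 0.2, 987.6, 0.3))   # False: 988.0 exceeds 987.9
print(prob_within(987.8, 0.2, 987.6, 0.3)) # ≈ 0.841 with k = 2
```

So under these assumptions the replacement fails the strict containment check but still has roughly an 84 % chance of being inside the requested bounds.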
I calculate uncertainties with:
https://www.isobudgets.com/combining-measurement-uncertainty/
Root sum of squares; in that case, putting components in series doesn't have to be the worst option.
If the major contributing values are close to one another, they may improve the resulting uncertainty a bit.
The configuration which can do that and also get close to the target value might be the best choice. Hence the discussed metric.
btw, multiple variables in series (more than two) will never improve the uncertainty, but always worsen it, with the variable with the smallest error as the (bare) minimum.
Just fill in the numbers and you will see why!
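To "fill in the numbers", here is a quick sketch of the root-sum-of-squares combination for resistors in series (the resistor values are just illustrative picks of mine), which also shows the combined uncertainty can never drop below any single contribution:

```python
import math

def series_rss(parts):
    """Series combination: nominal values add, and with the RSS method the
    absolute uncertainties add in quadrature (sensitivity dR/dRi = 1)."""
    total = sum(v for v, _ in parts)
    u = math.sqrt(sum(ui ** 2 for _, ui in parts))
    return total, u

parts = [(100.0, 1.0), (220.0, 2.2), (470.0, 4.7)]  # three 1 % resistors
total, u = series_rss(parts)
print(total, u)  # 790 Ω, uncertainty sqrt(1 + 4.84 + 22.09) ≈ 5.28 Ω
```

Note that 5.28 Ω is less than the worst-case sum (7.9 Ω) but still larger than the biggest single term (4.7 Ω), which is the "bare minimum" point above.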
btw, multiple variables in series (more than two) will never improve the uncertainty, but always worsen it, with the variable with the smallest error as the (bare) minimum.
Just fill in the numbers and you will see why!

Well, yes, the absolute error will increase if you put things in series, but then you can use smaller values (with generally smaller absolute uncertainty) with resistors in series vs. parallel.

If you put N resistors in series (and let's keep things simple and conservative by assuming a rectangular distribution that can be added, and not worry about central limit theorems), the partial derivative of the series resistance with respect to R1 will be 1. So if there are 10 resistors with a value that follows a rectangular distribution between 990 Ohm and 1010 Ohm (1k ± 1%), the worst-case total uncertainty will be 100 Ohm for a 10 kOhm resistor, aka a 1% 10 kOhm resistor.

If you parallel 10 resistors, the partial derivative with respect to R1 will be 1/100 (for small enough deviations between resistors, so the partial derivative is roughly constant), but you need 100 kOhm resistors (with a rectangular distribution from 99 kOhm to 101 kOhm) to get a total resistance of 10 kOhm. So the worst-case total uncertainty will be 10 * 1 kOhm / 100 = 100 Ohm with a mean of 10 kOhm, or again a 1% 10 kOhm resistor.

It works the same if you calculate with normal independent distributions (the math gets easier if you use conductance instead of resistance in the parallel case). So where do you see the difference, other than parasitics like lead resistance and leakage?
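That series-vs-parallel comparison is easy to check numerically. A sketch (my own illustration of the argument, not an established method) that perturbs each resistor to the edge of its tolerance band and sums the output shifts:

```python
def parallel(rs):
    """Parallel combination of a list of resistances."""
    return 1.0 / sum(1.0 / r for r in rs)

def worst_case_shift(rs, tol, combine):
    """Sum of |output shift| when each resistor in turn is moved to the
    edge of its tolerance band (linear worst-case, small deviations)."""
    nominal = combine(rs)
    shift = 0.0
    for i in range(len(rs)):
        bumped = list(rs)
        bumped[i] *= 1.0 + tol
        shift += abs(combine(bumped) - nominal)
    return nominal, shift

# Ten 1 kOhm 1 % resistors in series -> 10 kOhm, worst case ~100 Ohm
print(worst_case_shift([1000.0] * 10, 0.01, sum))
# Ten 100 kOhm 1 % resistors in parallel -> 10 kOhm, worst case ~100 Ohm too
print(worst_case_shift([100000.0] * 10, 0.01, parallel))
```

Both arrangements come out at about 10 kOhm ± 100 Ohm (the parallel case is a hair under 100 Ohm because of the small nonlinearity), matching the hand calculation above.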
B.t.w., when using the root-sum-squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (I'll take 0, just to be safe.)
B.t.w., when using the root-sum-squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (I'll take 0, just to be safe.)

Well, the point with relative error is that you always have to calculate it for each component (or variable), since it's always relative?
But this all depends on what you're looking for; sometimes one needs to know the absolute error instead.
Never bigger than ±0.001 ohm, for example, independent of the value itself.
Also, a little trick when reading a multimeter: since the error of a multimeter is relative (in percent), it often means that the last digit won't be the same for lower or higher values on the same range!
But my main point was that the result can never get better than the variable (the resistor, in this case) with the least amount of error; it will always be bigger, never smaller.
In your specific case of all resistors being equal, I guess you're right.
But my main point was that the result can never get better than the variable (the resistor, in this case) with the least amount of error; it will always be bigger, never smaller.

That's true, but with components in parallel you need to start off with much larger values (and hence larger uncertainties). So while the uncertainties do get better as you parallel more resistors, the absolute uncertainty of the resistors started off much higher. If you are doing an academic exercise of arranging resistors, then a parallel combination gives you a lower absolute uncertainty. But if you are actually trying to achieve a particular resistance value, then I doubt it matters (maybe in some extreme cases?).
Another example: 1 kOhm + 2 kOhm + 3 kOhm + 4 kOhm = 10 kOhm, and if all resistors were 1%, the uncertainty will be 10 Ohm + 20 Ohm + 30 Ohm + 40 Ohm = 100 Ohm (dR/dRn = 1). Now for a parallel combination of 21k // 42k // 62k // 81k Ohm, the total resistance is 10.01 kOhm, and the uncertainty is again 100 Ohm.

In your specific case of all resistors being equal, I guess you're right.

If my case where it doesn't matter is so specific, can you give an example where a series combination of resistors with the same relative tolerance yields a larger uncertainty than a parallel combination of resistors with the same tolerance that yields the same total value?
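A small sketch verifying those two four-resistor examples (values taken straight from the post above):

```python
def parallel(rs):
    """Parallel combination of a list of resistances."""
    return 1.0 / sum(1.0 / r for r in rs)

series = [1e3, 2e3, 3e3, 4e3]   # 1 % each
par = [21e3, 42e3, 62e3, 81e3]  # 1 % each

# Series: nominal 10 kOhm, worst case is 1 % of each value summed = 100 Ohm
print(sum(series), 0.01 * sum(series))

# Parallel: nominal ~10.01 kOhm; for small deviations the worst case is
# 0.01 * R_total, since dR/dRi = (R/Ri)^2 and the 1 % terms telescope
r = parallel(par)
print(r, 0.01 * r)
```

Both configurations land at about 10 kOhm with roughly 100 Ohm of worst-case uncertainty, as claimed.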
I think I've got a solution method to my root sum squared problem.
Using config 13 and the rule that when two components are the same, the relative uncertainty is lowest:
Take 100±1 (1%) for both A and B, resulting in 50±0.353553390593274.
Match C with that value, the combination A,B,C resulting in: 100±0.612372435695795
Match D with that value, the combination A,B,C,D resulting in: 50±0.293150984988965
Resulting in a relative uncertainty of 0.58630196997793%.
So the minimal relative uncertainty is 0.58630196997793 times the input uncertainty.
I could do that for every configuration and it will be fine I guess.
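That sequence of numbers can be reproduced with two small RSS helpers. This is a sketch: "config 13" itself is defined in the linked thread, so the order below (parallel, then series, then parallel, with C and D matched to the running value) is my reading of the steps, not a confirmed layout.

```python
import math

def series(a, ua, b, ub):
    """RSS series combination: both sensitivities are 1."""
    return a + b, math.hypot(ua, ub)

def parallel(a, ua, b, ub):
    """RSS parallel combination: dR/da = (R/a)^2, dR/db = (R/b)^2."""
    r = a * b / (a + b)
    return r, math.hypot((r / a) ** 2 * ua, (r / b) ** 2 * ub)

r, u = parallel(100.0, 1.0, 100.0, 1.0)  # A // B -> 50 ± 0.353553...
r, u = series(r, u, 50.0, 0.5)           # + C (50, 1 %) -> 100 ± 0.612372...
r, u = parallel(r, u, 100.0, 1.0)        # // D (100, 1 %) -> 50 ± 0.293150...
print(r, u, 100.0 * u / r)               # relative uncertainty ~0.586 %
```

The three intermediate results match the posted values to all those decimals, so the per-step RSS bookkeeping seems to be what was done.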
btw, all those decimals are not needed (is that an EE thing or so?)
btw, all those decimals are not needed (is that an EE thing or so?)

No, just copy 'n pasting..
P (parallel): 2 × 101±1 = 50.5±0.353553390593274
S (series): 50±0.5 + 0.5±0.005 = 50.5±0.500024999375031
The point is still that the error never goes down; it always goes up or, at best, stays the same, seen from the lowest error of all the variables.
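Those P and S numbers check out with the same RSS rules (sensitivities 1/4 each for two equal resistors in parallel, 1 each in series):

```python
import math

# P: two 101 Ohm ± 1 Ohm in parallel; sensitivities (R/Ri)^2 = 1/4 each
r_p = 101.0 / 2.0
u_p = math.hypot(0.25 * 1.0, 0.25 * 1.0)
print(r_p, u_p)  # 50.5 ± 0.353553...

# S: 50 Ohm ± 0.5 Ohm in series with 0.5 Ohm ± 0.005 Ohm; sensitivities 1
r_s = 50.0 + 0.5
u_s = math.hypot(0.5, 0.005)
print(r_s, u_s)  # 50.5 ± 0.500025...
```

Same nominal value, but the parallel route ends up with well under the 0.5 Ohm that the series route is stuck with.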
While this statement is true, it's useless when it's about finding combinations of resistors that produce a target value with the smallest uncertainty, because then you are not comparing resistors with the same value in series vs. parallel. And as I showed above, getting the absolute uncertainties close to one another is what gives you the lowest total uncertainty.

This aligns with the intuition that you want to design a system so the error budget is evenly distributed instead of having one "weak link": the chance that this one weak link has an extreme value in the long tail of its distribution (say, more than two sigma from nominal) is much higher than the chance of 10 different variables all having a value in the long tail of their distributions.