Author Topic: How to determine if a derived value or measurement is matching a specification  (Read 7673 times)


Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
I changed the metric. The graph now uses actual tolerances, which are half of the target ones.

A value is within spec if this nonconformity metric is <= 1.

It would be nice if the combinations were ordered by exactly the same metric, but that would probably hurt performance a lot. Not only because of the nonconformity expression, but mostly because the combine expression would need to be calculated with propagated uncertainties. :-[

Currently only the value is optimized. With this metric, the combined uncertainty should also be (partly) optimized.

I could alter the simulation to only do the full calculation when the combination is a candidate by value. But to determine that, it must be preceded by many other full calculations. Also, there will be targets where successive combinations keep getting better for a very large proportion of the combinations, forcing many full calculations.

There are so many combinations that keeping the results in memory and doing a second pass (starting with the best, ending when the uncertainty can no longer alter the optimal metric) is not doable.
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
I thought I was done with the simulation stuff, but I think it might take less than 3x the time to optimize using the metric. So investigating the metric improvement might be worthwhile.

First, a normal value optimization with no uncertainty calculations.
Then, with that optimum as a "candidate selector", the same calculation runs again, and only potential metric winners are fully calculated with uncertainties and the metric. So there is no excessive memory usage.
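Purely as a sketch of that two-pass idea (the names and structure below are my own placeholders, not the tool's actual code): pass 1 does the plain value optimization, and pass 2 re-enumerates the combinations, spending the expensive uncertainty propagation only on those whose value error alone could still beat the best metric found so far.

Code: [Select]
# Hypothetical two-pass sketch. value_error_of() is the cheap calculation,
# metric_of() the expensive one (value + propagated uncertainty); both are
# assumed to be on the same scale, e.g. normalized by the target tolerance.
def two_pass_search(enumerate_combos, value_error_of, metric_of):
    # Pass 1: plain value optimization, nothing stored except the winner.
    best_by_value = min(enumerate_combos(), key=value_error_of)

    # Seed the metric with one full evaluation of the pass-1 winner.
    best_combo, best_metric = best_by_value, metric_of(best_by_value)

    # Pass 2: re-enumerate. The value error alone is a lower bound on the metric
    # (the uncertainty term only adds to it), so most combos are rejected cheaply.
    for combo in enumerate_combos():
        if value_error_of(combo) >= best_metric:
            continue  # cannot beat the current best, skip the full calculation
        metric = metric_of(combo)  # expensive: propagate uncertainties
        if metric < best_metric:
            best_combo, best_metric = combo, metric
    return best_combo, best_metric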

I added a simple example of how the same combination result can have different uncertainty/tolerance.

https://uncertaintycalculator.com/
« Last Edit: April 05, 2022, 09:15:39 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
When combining components to meet a requested replacement value, I'd like to know the correct way of determining whether the replacement value is within spec.
https://www.eevblog.com/forum/projects/how-about-combining-some-components/
The requested target is something like: 987.6 Ω ± 300 mΩ
The replacement value could be something like: 987.8 Ω ± 200 mΩ
Is there a standardized way of determining whether the latter is within spec, with ..% probability?
I've done some research myself, but I don't expect to get a solid answer using Google.

It would be great if someone knows of a method better than taking the uncertainty bounds of both and checking whether the requested bounds fully overlap.

I've edited the uncertainty from 987.6 Ω ± 3 mΩ to 987.6 Ω ± 300 mΩ
and 987.8 Ω ± 2 mΩ to 987.8 Ω ± 200 mΩ
I had to read this over a couple of times, because I couldn't really follow.

But I can give an answer using a couple of methods.

One is the theoretical way to get you in the right direction.
This is called relative error estimation based on partial derivatives.
Sometimes also called linear approximation with the chain rule, I believe?
My apologies, I am a bit lost in translation since I learned this stuff in my native language; see the pic I found as an example.

Basically, you write an equation for the circuit in question, take the partial derivative with respect to each variable, take its absolute value, multiply it by the expected error, and sum all of these.
The downside of this method is that it assumes the error behaves linearly; in reality this obviously isn't always true. *
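As a small sketch of that recipe (my own example, a plain voltage divider with made-up 1% errors, not something from the post or its attachment): take the partial derivative of the output with respect to each variable, multiply its absolute value by that variable's expected error, and add everything up.

Code: [Select]
# Worst-case (linear) error propagation for Vout = V * R2 / (R1 + R2).
V, R1, R2 = 10.0, 1000.0, 2000.0   # nominal values, made up for illustration
dV, dR1, dR2 = 0.1, 10.0, 20.0     # expected errors, 1% of each value

# Partial derivatives of Vout with respect to each variable:
dVout_dV  = R2 / (R1 + R2)
dVout_dR1 = -V * R2 / (R1 + R2) ** 2
dVout_dR2 =  V * R1 / (R1 + R2) ** 2

# Worst case: sum of |partial derivative| * expected error over all variables.
dVout = abs(dVout_dV) * dV + abs(dVout_dR1) * dR1 + abs(dVout_dR2) * dR2
print(V * R2 / (R1 + R2), dVout)   # ~6.67 V nominal, ~0.11 V worst-case error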

If we are talking about probabilities, one needs to use standard error as well as the standard deviation.
Although these two look very similar (and A LOT of people mix them up!), they are VERY different!

One gives you a number for how precisely you know the average; the other gives you a number for the spread across the samples.
Using 2 or 3 sigma is enough for most general electronics; in science they mostly use 4 or 5 sigma (or more).

That being said, in general paralleling resistors usually doesn't change the relative error, while putting them in series results in doubling (or summing) the errors.

* Since the behavior and tolerance of most electronics is pretty tight (< 2%) and for devices like resistors pretty predictable, this isn't really an issue.
It will be a much bigger issue for other variables (non electronics) with a relative error around 20-30% or that kind of order of magnitude.
« Last Edit: April 13, 2022, 04:53:40 pm by b_force »
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
edit;

So if this is about the schematic shown in the first post with just some resistor networks, the equation is actually very easy to solve.

Since resistors in series will always get worse, the best possible network is simply to parallel resistors.
Other configurations will always be worse by definition.
« Last Edit: April 13, 2022, 04:50:13 pm by b_force »
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
Thanks for responding!
I calculate uncertainties with:
https://www.isobudgets.com/combining-measurement-uncertainty/
Root sum of squares; in that case, putting components in series doesn't have to be the worst.
If the major contributing values are close to one another, they may improve the resulting uncertainty a bit.
The configuration which can do that and also get close to the target value might be the best choice. Hence the chosen metric.
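For reference, a small sketch of that root-sum-of-squares propagation (my own illustration, using the usual sensitivity coefficients) for two resistors in series and in parallel:

Code: [Select]
import math

def series(r1, u1, r2, u2):
    # R = R1 + R2; both sensitivities are 1, so the RSS is just hypot(u1, u2).
    return r1 + r2, math.hypot(u1, u2)

def parallel(r1, u1, r2, u2):
    # R = R1*R2/(R1+R2); dR/dR1 = (R2/(R1+R2))^2 and dR/dR2 = (R1/(R1+R2))^2.
    r = r1 * r2 / (r1 + r2)
    return r, math.hypot((r2 / (r1 + r2)) ** 2 * u1, (r1 / (r1 + r2)) ** 2 * u2)

# Two equal 1% parts: either way the relative uncertainty drops to ~0.71%,
# because both parts contribute equally to the root sum of squares.
print(series(100, 1, 100, 1))    # (200, 1.414...)   -> 0.71 %
print(parallel(100, 1, 100, 1))  # (50.0, 0.3535...) -> 0.71 %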
« Last Edit: April 13, 2022, 06:39:41 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
Thanks for responding!
I calculate uncertainties with:
https://www.isobudgets.com/combining-measurement-uncertainty/
Root sum of squares; in that case, putting components in series doesn't have to be the worst.
If the major contributing values are close to one another, they may improve the resulting uncertainty a bit.
The configuration which can do that and also get close to the target value might be the best choice. Hence the discussed metric.
The root method is also something you can use.

It's just not as correct as the other method, especially when your equation becomes a little more complex.
The GUM method assumes that variables don't influence each other, which is very debatable (and often unlikely).
I would never recommend using it for that reason, unless you have a strong reason and enough certainty to believe that they won't.

These days it's just very easy to do these things in WolframAlpha; here's an example with a voltage divider and a variable V (input).
So why bother?

https://www.wolframalpha.com/input?i=abs+derivative+V*R2%2F%28R1%2BR2%29+with+respect+to+R2

(copy-paste, link isn't fully working unfortunately)

You also have to repeat this for R1 and V, obviously.
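If you'd rather stay inside a script, the same absolute partial derivatives can be generated symbolically, for example with sympy (a sketch; it does the same thing as the WolframAlpha query above):

Code: [Select]
import sympy as sp

V, R1, R2 = sp.symbols('V R1 R2', positive=True)
Vout = V * R2 / (R1 + R2)

# |dVout/dx| for each variable, as in the WolframAlpha query (repeated for R1 and V).
for var in (V, R1, R2):
    print(var, sp.simplify(sp.Abs(sp.diff(Vout, var))))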

If you use math programs like Maple or so, this can be done a little easier and quicker, or otherwise just copy it to Excel or something.

Btw, this is the worst-case way of calculating errors; there is also a more "probable" way of summing them,
since it's very unlikely that all variables will be at their maximum error.

In that case you combine the partial-derivative terms like the GUM method: square each term, sum them, and take the square root of the whole again.
Personally I prefer the worst case method, because that will give you an answer that you can more likely rely on.
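To make the difference between the two summation rules concrete (again my own 1% numbers on the voltage divider from the WolframAlpha example above):

Code: [Select]
import math

V, R1, R2 = 10.0, 1000.0, 2000.0
dV, dR1, dR2 = 0.1, 10.0, 20.0   # 1% errors on each variable

# |partial derivative| * error, one term per variable (the same terms in both methods):
terms = [
    R2 / (R1 + R2) * dV,              # from V
    V * R2 / (R1 + R2) ** 2 * dR1,    # from R1
    V * R1 / (R1 + R2) ** 2 * dR2,    # from R2
]

worst_case = sum(terms)                        # every error at its maximum at once
rss = math.sqrt(sum(t * t for t in terms))     # the more "probable" combination
print(worst_case, rss)                         # ~0.11 V vs ~0.074 V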

btw, multiple variables in series (more than two) will never improve, but always worsen, with the variable with the smallest error as the (bare) minimum.
Just fill in the numbers and you will see why! :)
« Last Edit: April 13, 2022, 07:39:22 pm by b_force »
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
I should have said relative uncertainty..  :-[
Those can improve a bit, using the root-sum-squared method.

Nice WolframAlpha link you posted, may become handy!

It's nice to know of another method of calculating uncertainties. I don't think I am going to switch to that way of calculating in my tool.
That tool uses datatypes which propagate the "standard" uncertainties. I took that approach as it seemed to be the scientific convention.
But it could theoretically support an extra "field" of worst case uncertainty in its quantity datatype as well.

But I know the cost of adding such functionality "system wide", so I'm not likely to go down that road. Still, it's good to know the current limitations.

The initial purpose of propagating uncertainties wasn't "worst case analysis" at all, it was about how to display quantities with sensible sig figs.

Besides that, it can still be used to get an indication of "uncertainty issues". I think you mean the same, but using slopes (partial derivatives) to calculate the effects of uncertainties is in some cases pretty tricky, especially when variables have large uncertainties. In that case both approaches are likely not good enough.
« Last Edit: April 13, 2022, 09:32:15 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
B.t.w., when using the root sum squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (Will take 0, just to be safe.)
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 2898
  • Country: 00
btw, multiple variables in series (more than two) will never improve, but always worsen, with the variable with the smallest error as the (bare) minimum.
Just fill in the numbers and you will see why! :)
Well, yes, the absolute error will increase if you put things in series, but then you can use smaller values (with generally smaller absolute uncertainty) if you put resistors in series vs parallel. If you put N resistors in series (and let's keep things simple and conservative by assuming a rectangular distribution that can be added and not worry about central limit theorems), the partial derivative of the series resistance with respect to R1 will be 1, so if there are 10 resistors with a value that follows a rectangular distribution between 990 Ohm and 1010 Ohm (1k +/- 1%), the worst-case total uncertainty will be 100 Ohm for a 10 kOhm resistor, aka a 1% 10 kOhm resistor. If you parallel 10 resistors, the partial derivative with respect to R1 will be 1/100 (for small enough deviations between resistors so the partial derivative is roughly constant), but you need 100 kOhm resistors (with a rectangular distribution from 99 kOhm to 101 kOhm) to get a total resistance of 10 kOhm. So the worst-case total uncertainty will be 10 * 1 kOhm / 100 = 100 Ohm with a mean of 10 kOhm, or again a 1% 10 kOhm resistor. It works the same if you calculate with normal independent distributions (the math gets easier if you use conductance instead of resistance in the parallel case). So where do you see the difference, other than parasitics like lead resistance and leakage?
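A quick numeric check of that example (my own arithmetic, using the worst-case sums from the post): ten 1 k, 1% resistors in series against ten 100 k, 1% resistors in parallel, both nominally 10 k.

Code: [Select]
n, tol = 10, 0.01

# Series: dR/dRi = 1, so the worst-case uncertainty is simply the sum of the errors.
r_series = n * 1e3
u_series = n * tol * 1e3                    # 100 ohm on 10 kOhm -> 1 %

# Parallel, equal resistors: dR/dRi = (R_total / Ri)^2 = 1/n^2 for each of them.
r_parallel = 100e3 / n
u_parallel = n * (1 / n**2) * tol * 100e3   # 10 * (1/100) * 1 kOhm = 100 ohm -> 1 %

print(r_series, u_series)       # 10000.0 100.0
print(r_parallel, u_parallel)   # 10000.0 100.0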

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
btw, multiple variables in series (more than two) will never improve, but always worsen, with the variable with the smallest error as the (bare) minimum.
Just fill in the numbers and you will see why! :)
Well, yes, the absolute error will increase if you put things in series, but then you can use smaller values (with generally smaller absolute uncertainty) if you put resistors in series vs parallel. If you put N resistors in series (and let's keep things simple and conservative by assuming a rectangular distribution that can be added and not worry about central limit theorems), the partial derivative of the series resistance with respect to R1 will be 1, so if there are 10 resistors with a value that follows a rectangular distribution between 990 Ohm and 1010 Ohm (1k +/- 1%), the worst-case total uncertainty will be 100 Ohm for a 10 kOhm resistor, aka a 1% 10 kOhm resistor. If you parallel 10 resistors, the partial derivative with respect to R1 will be 1/100 (for small enough deviations between resistors so the partial derivative is roughly constant), but you need 100 kOhm resistors (with a rectangular distribution from 99 kOhm to 101 kOhm) to get a total resistance of 10 kOhm. So the worst-case total uncertainty will be 10 * 1 kOhm / 100 = 100 Ohm with a mean of 10 kOhm, or again a 1% 10 kOhm resistor. It works the same if you calculate with normal independent distributions (the math gets easier if you use conductance instead of resistance in the parallel case). So where do you see the difference, other than parasitics like lead resistance and leakage?
Just make those partial derivatives as mentioned in post 27 and you will see, that's the best answer I can give.

But my main point was that the results can never get better than the variable (resistor in this case) with the least amount of error, but always will be bigger, never smaller.

In your specific case of all resistors being equal, I guess you're right.
With four resistors as an example, the total error is ΔRtot = 4*ΔR = ΔR1+ΔR2+ΔR3+ΔR4 (since the derivative = 1 in all cases) and the total resistance is also Rtot = 4*R.
Meaning that the total error in the end is the same, relatively speaking.
The absolute error is bigger of course, yes.
« Last Edit: April 13, 2022, 09:53:36 pm by b_force »
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
B.t.w., when using the root sum squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (Will take 0, just to be safe.)
Well the point with relative error is that you always have to calculate it for each component (or variable), since it's always relative?

But this all depends in what you're looking for, sometimes one needs to know the absolute error instead.
Never bigger than ±0.001 ohm for example independent of the value itself.

Also, a little trick when reading a multimeter, since the error of a multimeter is relative (in percent):
it often means that the last digit won't be the same for lower or higher values on the same range!!

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
B.t.w., when using the root sum squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (Will take 0, just to be safe.)
Well the point with relative error is that you always have to calculate it for each component (or variable), since it's always relative?

But this all depends in what you're looking for, sometimes one needs to know the absolute error instead.
Never bigger than ±0.001 ohm for example independent of the value itself.

Also, a little trick when reading a multimeter, since the error of a multimeter is relative (in percent):
it often means that the last digit won't be the same for lower or higher values on the same range!!
I think I've got a solution method to my root sum squared problem.
Using config 13 and the rule that when two components are the same, the relative uncertainty is lowest:
Take 100±1 (1%) for both A and B, resulting in 50±0.353553390593274.
Match C with that value; the combination A,B,C results in: 100±0.612372435695795
Match D with that value; the combination A,B,C,D results in: 50±0.293150984988965

Resulting in a relative uncertainty of: 0.58630196997793%

So the minimal relative uncertainty is 0.58630196997793 of the input uncertainty.

I could do that for every configuration and it will be fine I guess.
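Following those steps literally (a quick check of my own; ser/par are just the root-sum-of-squares propagation from above, and I'm assuming C is matched to 50 Ω and D to 100 Ω, both at 1%, as described):

Code: [Select]
import math

def ser(r1, u1, r2, u2):
    return r1 + r2, math.hypot(u1, u2)

def par(r1, u1, r2, u2):
    r = r1 * r2 / (r1 + r2)
    return r, math.hypot((r2 / (r1 + r2)) ** 2 * u1, (r1 / (r1 + r2)) ** 2 * u2)

ab = par(100, 1, 100, 1)     # A // B              -> (50, 0.3536)
abc = ser(*ab, 50, 0.5)      # (A // B) + C        -> (100, 0.6124)
abcd = par(*abc, 100, 1)     # ((A // B) + C) // D -> (50, 0.2932)
print(abcd, 100 * abcd[1] / abcd[0])   # relative uncertainty ~0.586 % for 1 % inputs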

“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 2898
  • Country: 00
But my main point was that the results can never get better than the variable (resistor in this case) with the least amount of error, but always will be bigger, never smaller.
That's true, but with components in parallel you need to start off with much larger values (and hence larger uncertainties). So while the uncertainties do get better as you parallel more resistors, the absolute uncertainty of the resistors started off much higher. If you are doing an academic exercise of arranging resistors, then a parallel combination gives you a lower absolute uncertainty. But if you are actually trying to achieve a particular resistance value, then I doubt it matters (maybe in some extreme cases?).

Another example, 1 kOhm + 2 kOhm + 3 kOhm + 4 kOhm = 10 kOhm, and if all resistors were 1%, the uncertainty will be 10 Ohm + 20 Ohm + 30 Ohm + 40 Ohm = 100 Ohm (dR/Rn=1). Now for a parallel combination of 21k // 42k // 62k // 81k Ohm, the total resistance is 10.01 kOhm, and the uncertainty is again 100 Ohm.
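The same check for these two 10 k combinations (my own arithmetic, worst case with 1% parts); for a parallel network of equal-tolerance resistors the relative worst-case uncertainty works out to that same tolerance regardless of the individual values, which is why the two cases match.

Code: [Select]
series_vals = [1e3, 2e3, 3e3, 4e3]
parallel_vals = [21e3, 42e3, 62e3, 81e3]
tol = 0.01

r_ser = sum(series_vals)
u_ser = sum(tol * r for r in series_vals)                       # dR/dRi = 1

r_par = 1 / sum(1 / r for r in parallel_vals)
u_par = sum((r_par / r) ** 2 * tol * r for r in parallel_vals)  # dR/dRi = (R/Ri)^2

print(r_ser, u_ser)   # 10000.0, 100.0
print(r_par, u_par)   # ~10009.7, ~100.1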

In your specific case of all resistors being equal, I guess you're right.
If my case where it doesn't matter is so specific, can you give an example where a series combination of resistors with the same relative tolerance yields a larger uncertainty than a parallel combination of resistors with the same tolerance that yields the same total value?

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
But my main point was that the results can never get better than the variable (resistor in this case) with the least amount of error, but always will be bigger, never smaller.
That's true, but with components in parallel you need to start off with much larger values (and hence larger uncertainties). So while the uncertainties do get better as you parallel more resistors, the absolute uncertainty of the resistors started off much higher. If you are doing an academic exercise of arranging resistors, then a parallel combination gives you a lower absolute uncertainty. But if you are actually trying to achieve a particular resistance value, then I doubt it matters (maybe in some extreme cases?).

Another example, 1 kOhm + 2 kOhm + 3 kOhm + 4 kOhm = 10 kOhm, and if all resistors were 1%, the uncertainty will be 10 Ohm + 20 Ohm + 30 Ohm + 40 Ohm = 100 Ohm (dR/Rn=1). Now for a parallel combination of 21k // 42k // 62k // 81k Ohm, the total resistance is 10.01 kOhm, and the uncertainty is again 100 Ohm.

In your specific case of all resistors being equal, I guess you're right.
If my case where it doesn't matter is so specific, can you give an example where a series combination of resistors with the same relative tolerance yields a larger uncertainty than a parallel combination of resistors with the same tolerance that yields the same total value?
To be honest, I don't know, I just fill in the numbers, well equations, and that's it  8) ;D

Maybe someone finds it a great exercise, lol.

The point is still that the error never goes down; it always goes up or maybe stays the same,
seen from the lowest error of all the variables.

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
But my main point was that the results can never get better than the variable (resistor in this case) with the least amount of error, but always will be bigger, never smaller.
That's true, but with components in parallel you need to start off with much larger values (and hence larger uncertainties). So while the uncertainties do get better as you parallel more resistors, the absolute uncertainty of the resistors started off much higher. If you are doing an academic exercise of arranging resistors, then a parallel combination gives you a lower absolute uncertainty. But if you are actually trying to achieve a particular resistance value, then I doubt it matters (maybe in some extreme cases?).

Another example, 1 kOhm + 2 kOhm + 3 kOhm + 4 kOhm = 10 kOhm, and if all resistors were 1%, the uncertainty will be 10 Ohm + 20 Ohm + 30 Ohm + 40 Ohm = 100 Ohm (dR/Rn=1). Now for a parallel combination of 21k // 42k // 62k // 81k Ohm, the total resistance is 10.01 kOhm, and the uncertainty is again 100 Ohm.

In your specific case of all resistors being equal, I guess you're right.
If my case where it doesn't matter is so specific, can you give an example where a series combination of resistors with the same relative tolerance yields a larger uncertainty than a parallel combination of resistors with the same tolerance that yields the same total value?
Hmm I think I've misread the challenge...
but anyway:
Example:
P: 2x101±1 = 50.5±0.353553390593274
S: 50±0.5+0.5±0.005 = 50.5±0.500024999375031
(Been lazy: not 100% accurate uncertainty values, but close enough)

Hence the rule: if both values are the same (P or S), the relative uncertainty is minimal.
« Last Edit: April 13, 2022, 11:57:01 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
B.t.w., when using the root sum squared method, is there a way to calculate the minimal relative uncertainty of the resulting value when only the relative uncertainty of the inputs (all equal) is known, but not their values? I need to calculate/know that for each component configuration for some optimization, but I haven't got a clue yet. (Will take 0, just to be safe.)
Well the point with relative error is that you always have to calculate it for each component (or variable), since it's always relative?

But this all depends in what you're looking for, sometimes one needs to know the absolute error instead.
Never bigger than ±0.001 ohm for example independent of the value itself.

Also, a little trick when reading a multimeter, since the error of a multimeter is relative (in percent):
it often means that the last digit won't be the same for lower or higher values on the same range!!
I think I've got a solution method to my root sum squared problem.
Using config 13 and the rule that when two components are the same, the relative uncertainty is lowest:
Take 100±1 (1%) for both A and B, resulting in 50±0.353553390593274.
Match C with that value; the combination A,B,C results in: 100±0.612372435695795
Match D with that value; the combination A,B,C,D results in: 50±0.293150984988965

Resulting in a relative uncertainty of: 0.58630196997793%

So the minimal relative uncertainty is 0.58630196997793 of the input uncertainty.

I could do that for every configuration and it will be fine I guess.
btw, all those decimals are not needed (is that an EE thing or so?)
In fact, they don't even exist physically (only in theoretical math).

1% on 100 is ±1 at best, so all the answers technically can't be better than ±1 ohm.
So the 50 ohm can never be better than 1% worst case.

I will show you the other method, will take a while to get that done! :)

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
This one is better?
Example:
P: 100±1 | 100000±1000 = 99.9000999000999±0.998003495006367
S: 2*50±0.5 = 100±0.707106781186548
(Been lazy: not 100% accurate values, but close enough. Had to edit this one as well. It's getting late..)
« Last Edit: April 13, 2022, 11:47:04 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
btw, all those decimals are not needed (is that an EE thing or so?)
No just copy 'n pasting.. ;D
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
btw, all those decimals are not needed (is that an EE thing or so?)
No just copy 'n pasting.. ;D
Lol, no worries! :)

Here are the expressions btw  8)
Some weird formatting thanks to Libre office :( , so it's the whole abs expression squared.
Hopefully I didn't make a mistake or typo :(

Do you have all the values for Ra, Rb, Rc and Rd ?
« Last Edit: April 14, 2022, 12:47:26 am by b_force »
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
Well, here's the result  :o  :-DD

Assumed that the resistors are all equal to 100 ohm.
« Last Edit: April 14, 2022, 12:46:36 am by b_force »
 

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
But all jokes aside, watch what happens when we change the tolerance of the individual resistors:

So the contribution of Rd is the most significant, followed by Rc, then Ra and Rb :)

This is exactly the reason why you want to do these kinds of calculations.
To see which variable is the most significant and needs to be the most accurate :)

One EXTREMELY important thing to mention is that this WILL change drastically when the values (and their tolerances or errors) are different!!!
This will change the overall picture completely.

So no general conclusions can be made!
« Last Edit: April 14, 2022, 12:53:48 am by b_force »
 

Offline arcnet

  • Newbie
  • Posts: 7
  • Country: de
What could also be used here is guard banding https://www.isobudgets.com/guard-banding-how-to-take-uncertainty-into-account/
"Ultimately, you use guard band methods to prevent the occurrence of false acceptance (Type I Error) and false rejection (Type II Error) errors."
 

Offline HendriXML (Topic starter)

  • Super Contributor
  • ***
  • Posts: 1085
  • Country: nl
    • KiCad-BOM-reporter
In the metric I'll be using, the guard band is kind of represented by actual uncertainty / target uncertainty as an offset/addend. It's a nonconformity metric, so less is better.
https://www.eevblog.com/forum/metrology/how-to-determine-wether-a-value-or-measurement-is-matching-a-specification/msg4102789/#msg4102789
« Last Edit: April 15, 2022, 05:41:14 pm by HendriXML »
“I ‘d like to reincarnate as a dung beetle, ‘cause there’s nothing wrong with a shitty life, real misery comes from high expectations”
 

Offline alm

  • Super Contributor
  • ***
  • Posts: 2898
  • Country: 00
P: 2x101±1 = 50.5±0.353553390593274
S: 50±0.5+0.5±0.005 = 50.5±0.500024999375031

This difference has nothing to do with a parallel combination. If you compare a parallel combination using linear adding of uncertainties, you arrive at the exact same result (50.5 +/- 0.505). The difference only appears when you start assuming the central limit theorem applies, and calculate uncertainties as the root of the sum of squares. Because in the root of the sum of squares of two unequal values, the largest value counts much more (quadratically more  ;) ) than the smaller value. That's why paralleling a large value with a smaller value gives a lower uncertainty with this calculation. Try doing the same math the other way around: two resistors with the same value in series, and a resistor in parallel with a much larger resistor. For example two 50 +/- 0.5 Ohm resistors in series (giving you 100 Ohm +/- 0.7 Ohm), versus a 101 +/- 1.01 and 10111 +/- 101.11 resistor in parallel (giving you 100 Ohm +/- 0.99 Ohm). So what helped is making sure both resistors contribute the same to the total uncertainty, not paralleling them.
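A quick check of those numbers (my own arithmetic, root sum of squares with the usual sensitivities):

Code: [Select]
import math

# Two 50 ohm, 1 % resistors in series: equal contributions, so the relative
# uncertainty improves to ~0.71 %.
print(100, math.hypot(0.5, 0.5))     # 100 ohm +/- ~0.71 ohm

# 101 ohm // 10111 ohm, both 1 %: the big resistor's sensitivity (R1/(R1+R2))^2
# is tiny, so the small resistor dominates the sum and almost nothing is gained.
r1, u1, r2, u2 = 101.0, 1.01, 10111.0, 101.11
r = r1 * r2 / (r1 + r2)
u = math.hypot((r2 / (r1 + r2)) ** 2 * u1, (r1 / (r1 + r2)) ** 2 * u2)
print(r, u)                          # ~100.0 ohm +/- ~0.99 ohm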

The point is still that the error never goes down; it always goes up or maybe stays the same,
seen from the lowest error of all the variables.

While this statement is true, it's useless when it's about finding combinations of resistors that produce a target value with the smallest uncertainty, because then you are not comparing resistors with the same value in series vs parallel. And as I showed above, getting the absolute uncertainties close to each other is what gives you the lowest total uncertainty. This aligns with the intuition that you want to design a system so the error budget is evenly distributed instead of having one "weak link". Because the chance that this one weak link has an extreme value in the long tail of the distribution (say > two sigma from nominal) is much higher than 10 different variables all having a value in the long tail of their distribution.

Offline b_force

  • Super Contributor
  • ***
  • Posts: 1381
  • Country: 00
    • One World Concepts
While this statement is true, it's useless when it's about finding combinations of resistors that produce a target value with the smallest uncertainty, because then you are not comparing resistors with the same value in series vs parallel. And as I showed above, getting the absolute uncertainties close to each other is what gives you the lowest total uncertainty. This aligns with the intuition that you want to design a system so the error budget is evenly distributed instead of having one "weak link". Because the chance that this one weak link has an extreme value in the long tail of the distribution (say > two sigma from nominal) is much higher than 10 different variables all having a value in the long tail of their distribution.
Well, actually it does: one resistor will give the smallest error.
Otherwise two, etc etc etc.

That being said, just putting multiple resistors in some kind of odd combination doesn't make any sense.
Unless you need a value that doesn't fit in any E-series.

As for the statistical examples, I would go for worst case, not best case.
10 resistors might as well all have a systematic bias or drift in one direction.
Point is, you just never know unless you verify.
So sorry, but the idea of 10 resistors likely being better on average doesn't hold up I think.

Again I don't know what the whole idea is of this circuit, but even 0.1% is pretty cheap these days.

