Tolerance Analysis: Is Resistor TCR a Normal Distribution or Maximum?
TimNJ:
Hi all,

I'm working on preparing a tolerance analysis for a power supply. I'd like to estimate the spread of output voltages in production. The output resistor divider is one of the key circuit blocks that determines the accuracy of the output voltage. I am attempting a root-sum-square (RSS) method, using the following parameters:

* Initial tolerance
* Shift after soldering
* Temperature coefficient (TCR)

We can assume the initial tolerance and shift after soldering are normally distributed (even if that might not be 100% the truth). But is TCR reported as a typical value or a worst case? If it's typical, then perhaps it can also be treated as a normal distribution? On the other hand, I would imagine the TCR for a particular material to be rather constant, i.e. perhaps the distribution is much narrower?

Thanks in advance,
Tim
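For concreteness, here is a minimal sketch of the RSS combination described above. It is Python with entirely placeholder numbers (divider values, tolerance terms, temperature excursion, none from any datasheet), and it assumes each datasheet limit can be read as a roughly 3-sigma value, which for TCR is exactly the assumption being questioned in this thread.

Code:
import math

# Placeholder divider: Vout = VREF * (1 + R_TOP / R_BOT), e.g. an adjustable regulator.
# Every number below is illustrative, not taken from a datasheet.
VREF = 2.5          # V
R_TOP = 38.0e3      # ohm
R_BOT = 10.0e3      # ohm
DELTA_T = 50.0      # degC excursion from the 25 degC reference point

# Per-resistor error terms, expressed as fractional limits.
INIT_TOL = 0.001            # +/-0.1 % initial tolerance
SOLDER   = 0.0005           # +/-0.05 % shift after soldering
TCR      = 25e-6 * DELTA_T  # +/-25 ppm/degC over the excursion

def rss(*limits):
    """Root-sum-square of independent error limits (each treated as ~3 sigma)."""
    return math.sqrt(sum(x * x for x in limits))

# Fractional error of one resistor.
per_resistor = rss(INIT_TOL, SOLDER, TCR)

# Vout depends only on the ratio R_TOP / R_BOT. If the two resistors' errors are
# independent, the ratio error is the RSS of the two per-resistor errors.
# (If their TCRs track, as same-material parts often do, this is pessimistic.)
ratio_err = rss(per_resistor, per_resistor)

vout_nom = VREF * (1.0 + R_TOP / R_BOT)
# Only the VREF * (R_TOP / R_BOT) term is affected by the ratio error.
vout_err = VREF * (R_TOP / R_BOT) * ratio_err

print(f"Nominal Vout: {vout_nom:.3f} V")
print(f"Per-resistor RSS error: {per_resistor * 100:.3f} %")
print(f"Estimated output spread: +/-{vout_err * 1000:.1f} mV "
      f"(+/-{vout_err / vout_nom * 100:.3f} % of Vout)")

Whether the TCR term belongs inside the RSS (spec read as ~3 sigma) or should instead be added on top as a hard limit is the question the rest of the thread is really about.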
coppercone2:
It will depend on the manufacturing process. The manufacturer's spec is there to cover their ass, and their production run is probably more accurate than the tolerance so they have a smaller defect rate. They will also probably want to give you something a bit better because of drift and warranty claims. All those resistors most likely have some small chance of drastically changing in value (to a reasonable degree, of course) because of internal strains and such.

With a good manufacturer, you buy 50 ppm resistors (say, Yageo) and you will get something like +/-5 ppm, 14 ppm, etc. You have some headroom, but when you make something you need to base its spec on the manufacturer's values unless you test the resistors yourself. Trying to game unspecified curves is horseshit design unless you test. And even then, you have NO IDEA whether those resistors have some tendency to drift 40 ppm in the year after soldering despite the initial 5 ppm, which would bring them to 45 ppm and still within spec.

But with cheap Chinese resistors, you buy 50 ppm and it's going to be like 40 ppm, 50 ppm... very close to spec. No headroom for unanticipated drift.

If you sell measurement equipment you don't want problems; if you design to the spec, your design probably works. Keep it simple because it's more respectable. This kind of analysis is only useful for giving 'typical' values in your spec sheet.
RandallMcRee:
I have always assumed that it is typical. There is a lot of evidence in various threads in the metrology section that suggests it is actually "optimistic" rather than typical. My evidence is anecdotal, so hopefully someone will reply with real production data.

Resistors I have measured tend to fall in the typical bucket, except for the Vishay S102C, which are in the optimistic bucket, i.e. usually worse than typical. Susumu RG were in the typical bucket, however.

In any case, TCR is not, to my knowledge, ever a 100%-tested parameter. So there is that.
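To put numbers on how much that "typical vs. optimistic" question matters for the original RSS analysis, here is a rough Monte Carlo sketch (Python/NumPy, all values invented) comparing the predicted TCR-induced ratio error of a divider when a 25 ppm/degC spec is read as a 3-sigma normal limit versus when parts are assumed to land anywhere up to the limit (uniform over +/-spec). Neither model is claimed to match any real manufacturer's distribution.

Code:
import numpy as np

rng = np.random.default_rng(0)

N = 100_000           # Monte Carlo samples
TCR_SPEC = 25e-6      # +/-25 ppm/degC datasheet limit (placeholder)
DELTA_T = 50.0        # degC excursion
R_TOP, R_BOT = 38.0e3, 10.0e3

def ratio_error(tcr_top, tcr_bot):
    """Fractional error of R_TOP / R_BOT due to TCR alone over DELTA_T."""
    top = R_TOP * (1.0 + tcr_top * DELTA_T)
    bot = R_BOT * (1.0 + tcr_bot * DELTA_T)
    return (top / bot) / (R_TOP / R_BOT) - 1.0

# Model A: spec limit read as 3 sigma of a normal distribution ("typical-ish").
sigma = TCR_SPEC / 3.0
err_a = ratio_error(rng.normal(0.0, sigma, N), rng.normal(0.0, sigma, N))

# Model B: parts anywhere up to the limit (uniform over +/-spec, closer to "maximum").
err_b = ratio_error(rng.uniform(-TCR_SPEC, TCR_SPEC, N),
                    rng.uniform(-TCR_SPEC, TCR_SPEC, N))

for name, err in [("normal, spec = 3 sigma", err_a), ("uniform over +/-spec", err_b)]:
    print(f"{name:24s}  std = {err.std() * 1e6:6.1f} ppm   "
          f"99.7th pct |err| = {np.percentile(np.abs(err), 99.7) * 1e6:6.1f} ppm")

If both resistors come from the same batch and material, their TCRs may track and largely cancel in the ratio, which is a third case again; only measurement (or an explicit ratio-tracking spec from the manufacturer) tells you which model applies.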
coppercone2:
Well, I am just saying you don't sell typical; you sell a spec and hope that typical performance is an incentive for people to buy. Typical being better than spec shows you know what you're doing.

Dave Jones has a video on this. What he found was that within a particular batch you get some offset within the tolerance, with a bell-curve spread around it. Giving people a probability density function of your equipment meeting a specification is unheard of.

With individual testing it's better, but you still don't know if those resistors have some kind of manufacturing 'plot' behind them with regard to time, and the manufacturer does not guarantee that the manufacturing process won't change. This means more testing cost. If you do environmental testing and you do everything to spec, you can decrease your sample size because you are more confident. And you can rest easier with your reputation, because then you are allowed to pass the buck to the resistor manufacturer if someone inquires about the design and it looks solid.
coppercone2:
Not to say your analysis is not useful. You can get good data regarding the 'typical' specification (hey, this stuff is good and the manufacturer is careful), and you can get good data to set a reasonable calibration interval based on confidence, instead of slapping a random calibration interval on it. That stuff saves people money and time too, if they need to calibrate less often, or if they honestly know it needs frequent calibration, so they don't arbitrarily decide on every year when it really needs to be every 6 months.

Realistically you want longer calibration intervals, because higher stability is an angle a competitor will use against you, so it tells you that you need to improve your design, or loosen your spec if you know it's actually difficult to design something better. Unless it's just awesome and they just need to deal with how finicky it is.
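As a concrete (entirely made-up) example of that calibration-interval point: if type testing gives you a drift rate and you know the error right after calibration, a first-cut interval is just the remaining budget divided by the drift rate, derated for whatever confidence you want across the population. A tiny sketch, every number a placeholder:

Code:
# Toy calibration-interval estimate. All numbers are placeholders.
SPEC_LIMIT   = 0.50   # % of reading allowed by the published spec
INITIAL_ERR  = 0.15   # % error right after calibration (adjustment + uncertainty)
DRIFT_RATE   = 0.08   # %/year long-term drift observed in type testing
GUARD_FACTOR = 2.0    # derating so most of the population stays in spec, not just the average unit

budget = SPEC_LIMIT - INITIAL_ERR                     # error budget left for drift
interval_years = budget / (DRIFT_RATE * GUARD_FACTOR)

print(f"Drift budget: {budget:.2f} %")
print(f"Suggested calibration interval: {interval_years:.1f} years")

A real interval analysis would use the measured drift distribution rather than a single guard factor, but the structure is the same.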