Author Topic: Resilient Design Principles  (Read 10778 times)


Offline hammyTopic starter

  • Supporter
  • ****
  • Posts: 465
  • Country: 00
Resilient Design Principles
« on: January 17, 2017, 11:44:35 pm »
Hi

I'm looking for resilient design principles in electronics engineering. Are there any existing resources on this topic (books or whitepapers)?
I thought there must be a counterpart to the nowadays common "build-as-cheap-as-possible" design principles.  :-//
I would very much appreciate any help or guidance.
Cheers
hammy
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 29671
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Resilient Design Principles
« Reply #1 on: January 17, 2017, 11:59:05 pm »
From a retired and wise Dr. of EE mate:

Never subject any mechanical or electrical component to more than 63% of ANY of its maximum ratings.

If this is applied to ALL facets of a design, reliability and longevity are much enhanced.
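
For concreteness, here is a minimal sketch (Python) of applying such a blanket rule as a design-review check; the part names, ratings and operating points are invented for illustration, not taken from any datasheet:

Code: [Select]
# Hypothetical derating check: flag any part whose worst-case operating
# point exceeds 63% of its datasheet maximum rating.
DERATING = 0.63

parts = [
    # (name, rating type, datasheet maximum, worst-case operating value)
    ("C12", "voltage (V)",       50.0,  24.0),
    ("R7",  "power (W)",          0.25,  0.20),
    ("Q3",  "junction temp (C)", 150.0,  85.0),
]

for name, kind, maximum, operating in parts:
    limit = DERATING * maximum
    status = "OK" if operating <= limit else "EXCEEDS derating limit"
    print(f"{name}: {kind} {operating} of {maximum} max "
          f"(limit {limit:.2f}) -> {status}")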
« Last Edit: January 18, 2017, 12:04:00 am by tautech »
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #2 on: January 18, 2017, 01:44:01 am »
From a retired and wise Dr. of EE mate:

Never subject any mechanical or electrical component to more than 63% of ANY of its maximum ratings.

If this is applied to ALL facets of a design, reliability and longevity are much enhanced.

Sometimes you must, but this is great starting advice.  Violate it kicking and screaming, one part or component at a time.  Your reliability declines to the power of the number of violations.  Almost any reliability text will give good advice on this part of the problem.

Resilient design is different from (or at least in addition to) reliable design.  The advice here is to think.  Often.  Deeply.  Widely.  What can fail?  What if something is pushed outside its design limits?  What can you do to make the failure graceful, or not permanent?  What can users do?  Who knew that someone would use your widget while taking a shower?  I don't know of any good resources on the overall subject.  It would be hard to write a general text because the solutions are so dependent on the device and its application.  The reliability books will describe FMECA (Failure Modes, Effects and Criticality Analysis), which is at least a tool to help start the thought process.

Even though I am an EE I tend to think mechanically, so my best example is a simple strain gauge force sensor.  You design it so the nominal loads don't exceed the recommended strain limits for the sensor.  Then you add a solid mechanical stop so that when somebody tries to weigh their car on the bathroom scale it merely limits out, but isn't permanently bent or torn.  Of course this extreme will require other modifications to the overall bathroom scale, but the idea is there.
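
To make that thought process concrete, here is a tiny FMECA-style worksheet sketch in Python; the parts, failure modes, severity scores and mitigations are invented examples, loosely drawn from this thread:

Code: [Select]
# Hypothetical FMECA worksheet: for each part, a failure mode, its effect,
# a severity score, and the planned mitigation.
fmeca = [
    {"part": "strain gauge", "mode": "overload / torn element",
     "effect": "scale permanently reads garbage", "severity": 4,
     "mitigation": "mechanical stop limits travel to a safe strain"},
    {"part": "feedback resistor (top leg)", "mode": "open",
     "effect": "regulator output rises to the input rail", "severity": 5,
     "mitigation": "split into two parallel resistors"},
    {"part": "alarm LED", "mode": "open",
     "effect": "visual alarm indication lost", "severity": 3,
     "mitigation": "monitor LED current and raise a secondary alarm"},
]

# Review the highest-severity items first.
for row in sorted(fmeca, key=lambda r: -r["severity"]):
    print(f'[S{row["severity"]}] {row["part"]}: {row["mode"]} -> '
          f'{row["effect"]} | mitigation: {row["mitigation"]}')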
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12411
  • Country: au
Re: Resilient Design Principles
« Reply #3 on: January 18, 2017, 02:07:32 am »
The figure of 63% is one I personally haven't seen before, but OK.

Resilient design is different from (or at least in addition to) reliable design.  The advice here is to think.  Often.  Deeply.  Widely.  What can fail?  What if something is pushed outside its design limits?  What can you do to make the failure graceful, or not permanent?

I like this philosophy and have followed something very similar in my IT days.  The "graceful failure" is a key part.

One example would be one of those surgical robots.  You don't want a failure to result in a scalpel blade suddenly traversing 10cm within a patient on the table.  Another is aerospace, where a failure can have drastic and possibly lethal consequences.
 

Offline DTJ

  • Super Contributor
  • ***
  • Posts: 1010
  • Country: au
Re: Resilient Design Principles
« Reply #4 on: January 18, 2017, 04:34:37 am »
 

Offline hammyTopic starter

  • Supporter
  • ****
  • Posts: 465
  • Country: 00
Re: Resilient Design Principles
« Reply #5 on: January 18, 2017, 10:58:02 am »
Really good advice, thank you!

@DTJ: I ordered that book, thank you!
@tautech: 63%, got it.

@CatalinaWOW @brumby
Quote
Resilient design is different from (or at least in addition to) reliable design.  The advice here is to think.  Often.  Deeply.  Widely.  What can fail?  What if something is pushed outside its design limits?  What can you do to make the failure graceful, or not permanent?

Are there some guides?  Something like: "To prevent a hard failure here, use two resistors in parallel." or "If this visual alarm indicator, an LED, fails, build an error detection circuit around this part of the circuit to trigger another alarm."

Some failures are like the surgical robot example.  You have to think, that's right.
But other failures are more common: too much voltage across a resistor, ESD, spikes, a wrong power supply connected.  Errors like this.
And the solution is maybe something like: use a diode.  Clamp the voltage with a Zener diode.  Use a bridge rectifier.
Or "If a component like this fails, it behaves like this..."

The QA techniques like FMEA, Pareto and Ishikawa are (as far as I know) all about already existing devices.  The results of these QA techniques are used to improve the design in individual steps.  But if money and time do not matter, how can I design a circuit to be quite resilient the first time?  And how can I get this knowledge without a coworker who has already worked 30 years in aerospace or space technology?

For sure you cannot prevent everything.  For sure you have to test and break a lot.  But the only principles I find in books nowadays are "build cheap" or "buy expensive components to prevent likely failures" or "get a good manufacturing company with QA".  I don't think this is the whole answer.  Where is the knowledge to get most of the common stuff right the first time?

Somewhere in this forum (?) I heard a story about engineering before the 1970s and after.  It was said that the work of the engineers changed drastically.  Before that time everything was designed to last "forever".  After that, the universities educated engineers to build stuff with minimal cost and effort; anything else was considered inefficient.  This was a huge difference in mindset between these groups, and they didn't get along very well inside the companies.  I don't know if this story is entirely true.  But maybe some of the old greybeard engineers wrote a book about this "how to do it" mindset?

Cheers
hammy

PS Maybe I'm asking for too much.  Maybe this old knowledge is gone, or it is no longer applicable to modern circuits.  :-//
« Last Edit: January 18, 2017, 11:00:34 am by hammy »
 

Offline KhronX

  • Frequent Contributor
  • **
  • Posts: 345
  • Country: fi
    • Khron's Cave - Electronics Blog
Re: Resilient Design Principles
« Reply #6 on: January 18, 2017, 11:05:54 am »




Somewhere in this forum (?) I heard a story about engineering before the 1970s and after.  It was said that the work of the engineers changed drastically.  Before that time everything was designed to last "forever".  After that, the universities educated engineers to build stuff with minimal cost and effort; anything else was considered inefficient.  This was a huge difference in mindset between these groups, and they didn't get along very well inside the companies.  I don't know if this story is entirely true.  But maybe some of the old greybeard engineers wrote a book about this "how to do it" mindset?

https://youtu.be/-1j0XDGIsUg?t=1769
Khron's Cave - Electronics - Audio - Teardowns - Mods - Repairs - Projects - Music - Rants - Shenanigans
 
The following users thanked this post: hammy, mtdoc

Offline hammyTopic starter

  • Supporter
  • ****
  • Posts: 465
  • Country: 00
Re: Resilient Design Principles
« Reply #7 on: January 18, 2017, 02:50:47 pm »
https://youtu.be/-1j0XDGIsUg?t=1769

Thank you! Around ~29:30 -> old engineers vs new engineers.
« Last Edit: January 18, 2017, 02:52:20 pm by hammy »
 
The following users thanked this post: nugglix

Offline Zbig

  • Frequent Contributor
  • **
  • Posts: 927
  • Country: pl
Re: Resilient Design Principles
« Reply #8 on: January 18, 2017, 03:19:57 pm »
From a retired and wise Dr. of EE mate:

Never subject any mechanical or electrical component to more than 63% of ANY of its maximum ratings.

If this is applied to ALL facets of a design, reliability and longevity are much enhanced.

Let me know how finding those 8V-max digital parts that can be "safely" powered from 5V is going ;)
 

Offline KhronX

  • Frequent Contributor
  • **
  • Posts: 345
  • Country: fi
    • Khron's Cave - Electronics Blog
Re: Resilient Design Principles
« Reply #9 on: January 18, 2017, 03:35:52 pm »
More like (old) engineers vs. bean-counters...  :palm:

https://youtu.be/-1j0XDGIsUg?t=1769

Thank you! Around ~29:30 -> old engineers vs new engineers.
Khron's Cave - Electronics - Audio - Teardowns - Mods - Repairs - Projects - Music - Rants - Shenanigans
 

Offline dmills

  • Super Contributor
  • ***
  • Posts: 2093
  • Country: gb
Re: Resilient Design Principles
« Reply #10 on: January 18, 2017, 03:46:18 pm »
Be conservative in what you generate, liberal in what you accept: sage advice for IO interfaces and protocol implementations (and life).

ESD and RFI countermeasures should NOT be subject to "value engineering", and use single-source and custom components only after kicking and screaming.  Custom LCD (and especially OLED) panels, looking at you...

Include circuit diagrams, and label them with expected voltage ranges wherever this makes sense.
Include protocol descriptions and register maps in the manuals; that way, 30 years later someone can still hack a new processor onto the thing.

Include source code and build scripts.

The mechanical design is often as much of a factor as the electronics: MLCCs subjected to board flex fail.  Some of the automotive standards are good on this, as is the NASA stuff.
 
Regards, Dan.
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #11 on: January 18, 2017, 05:22:36 pm »
Really good advice, thank you!

@DTJ: I ordered that book, thank you!
@tautech: 63%, got it.

The QA techniques like FMEA, Pareto and Ishikawa are (as far as I know) all about already existing devices.  The results of these QA techniques are used to improve the design in individual steps.  But if money and time do not matter, how can I design a circuit to be quite resilient the first time?  And how can I get this knowledge without a coworker who has already worked 30 years in aerospace or space technology?


FMECA can beneficially be used on new designs.  It is tedious, which makes it hard to remain thoughtful, and thoughtfulness is the important part.  It is the process of going through each component of your design and asking: what could go wrong with this part?  The non-thinking version often simplifies this to Short or Open, which misses the point of using the methodology this way, though it does simplify the next stage, the criticality analysis: what happens if the proposed failure happens?

Even simple examples get complex, but here are a couple.  Take a resistor used in a divider to set the output of a variable linear regulator.

What could happen?  Opens are obvious, either at manufacture or due to some life event.  Shorts can happen too, though they are less likely at manufacture.  But value shifts can also occur.  Now what happens?  If the resistor is the upper leg of the divider, the output goes all the way to the input value.  If this input value is high enough, everything downstream fries.  So you might take defensive steps like designing the input stage to provide less voltage (after thinking through the possible negative consequences of that), or, as you suggest, splitting this resistor into parallel paths.  Or other things entirely.  Value shifts may or may not have serious consequences; it depends on how the regulated output is used.  Shorts in the upper leg turn the device off.  That may be the graceful failure you are looking for, or it may require mitigation such as higher-wattage resistors to make shorting less likely.
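
As a sketch of that analysis, assume a feedback divider where Vout = Vref * (1 + Rtop / Rbottom); the reference, input rail and resistor values below are invented for illustration:

Code: [Select]
# Hypothetical feedback divider on an adjustable regulator.
VREF, VIN = 1.0, 24.0        # feedback reference and input rail, volts
RTOP, RBOTTOM = 40e3, 10e3   # nominal divider, giving Vout = 5 V

def vout(rtop, rbottom):
    if rbottom == float("inf"):        # bottom leg open: FB pin sees Vout
        return VREF                    # loop regulates the output down to Vref
    if rtop == float("inf"):           # top leg open: FB pin pulled low
        return VIN                     # loop drives the output to the rail
    return VREF * (1 + rtop / rbottom)

print("nominal:         ", vout(RTOP, RBOTTOM))           # 5.0 V
print("top leg open:    ", vout(float("inf"), RBOTTOM))   # 24 V - fries the load
print("top leg shorted: ", vout(0.0, RBOTTOM))            # 1.0 V - device 'off'
print("bottom leg open: ", vout(RTOP, float("inf")))      # 1.0 V
print("+20% value shift:", vout(RTOP * 1.2, RBOTTOM))     # 5.8 V - acceptable?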

Filter capacitors are another example.  In most cases they follow a resilient path: far more capacitance is placed than absolutely required, so the failure of one or more is undetectable in device operation.  But since there is seldom any real analysis of the requirements, there is a danger that the risk of a short, which increases with each additional capacitor, grows to exceed the risk of inadequate filtering performance.  That is questionable practice, because the consequences of a short usually far exceed the consequences of excess noise on the supply.
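
That trade-off can be put into rough numbers.  Assuming each capacitor has a small, independent chance of developing a life-ending short (the probability below is a pure assumption):

Code: [Select]
# Probability that at least one of N parallel filter capacitors shorts,
# assuming each has an independent per-lifetime short probability p.
p = 1e-4   # assumed per-capacitor probability of a life-ending short

for n in (1, 5, 20, 100):
    p_any_short = 1 - (1 - p) ** n
    print(f"{n:3d} caps: P(at least one short) = {p_any_short:.3%}")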

More complex components have many more possible changes.  Non-linearities develop, junctions soften, gains drop, thresholds change .... a whole list of things to think about.

It is the complexity and the variability of thought required to do this well that convinces me that good engineers will be employed far into the future.  Until AI reaches human-level intelligence and flexibility, those jobs will not be lost to automation.  That comforting thought does not apply to the many who wear the title of engineer but only paste together designs from application notes.
 
The following users thanked this post: hammy

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #12 on: January 18, 2017, 05:38:33 pm »




Somewhere in this forum (?) I heard a story about engineering before the 1970s and after.  It was said that the work of the engineers changed drastically.  Before that time everything was designed to last "forever".  After that, the universities educated engineers to build stuff with minimal cost and effort; anything else was considered inefficient.  This was a huge difference in mindset between these groups, and they didn't get along very well inside the companies.  I don't know if this story is entirely true.  But maybe some of the old greybeard engineers wrote a book about this "how to do it" mindset?

PS Maybe I'm asking for too much.  Maybe this old knowledge is gone, or it is no longer applicable to modern circuits.  :-//

The shift is real, but it is not necessarily bad or evil.  I spent my career in aerospace, where things are designed to last "forever" and under all conditions.  Twenty years was a typical design life, and products I helped design and produce in the 1970s have only left service in the last decade.

That sounds good to someone cursing the cell phone that died 18 months into a two-year contract, but it has very real downsides.  It means that the technology in such a product can advance only very slowly.  Think of being forced to do signal processing without microprocessors, which only became available in the mid-1970s.  Or think of using a desktop computer with the CPUs of 20 or more years ago, on a CRT display, since LCDs were an expensive and unproven technology 20 years ago.

Technology has less obvious impacts too.  Advances in knowledge across the board have allowed higher-performance designs.  The old designs were based on materials whose properties weren't too well controlled, using design techniques that were tedious when applied in any detail.  The solution was to allow huge safety margins.  So things were heavy.  Big.  Awkward.  Maybe the tradeoff for robust isn't perfect.  How many of you choose to use one of the old boat-anchor oscilloscopes or power supplies instead of the new lightweight versions?  Or would prefer to drive an old car that weighed 1500 kilos and got terrible gas mileage while having less power than current vehicles?

All these new tools have allowed engineers to design close to the edge on cost and reliability also.  And the market has rewarded that behavior.
 
The following users thanked this post: hammy, PlainName

Online tautech

  • Super Contributor
  • ***
  • Posts: 29671
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Resilient Design Principles
« Reply #13 on: January 18, 2017, 06:21:36 pm »
Somewhere in this forum (?) I heard a story about engineering before the 1970s and after.  It was said that the work of the engineers changed drastically.  Before that time everything was designed to last "forever".  After that, the universities educated engineers to build stuff with minimal cost and effort; anything else was considered inefficient.  This was a huge difference in mindset between these groups, and they didn't get along very well inside the companies.  I don't know if this story is entirely true.  But maybe some of the old greybeard engineers wrote a book about this "how to do it" mindset?
Based on the % I offered, that's exactly how they did it.  My buddy worked for a couple of Govt departments in his younger years, and all design was done that way by department rules.  Everything was built like the proverbial brick outhouse.  :)
In later years, with the widespread use of semiconductors, standards eased somewhat until a "balance" of reliability vs cost became popular, and with many products today cost is the primary design factor.
There are still high-quality products; think telcos and public transmission.
As consumer electronics started to venture into the power electronics field in the late 70's and 80's, the then-new power devices were often used too close to their ratings and failures ensued.  Now power devices are much more robust and reliable, and the temperatures some of them commonly operate at are  :o.

IMO, for what most of us do, a 75% maximum is a good figure to use today, but as it's always been: conservative design lasts.
I'll still stick to my ~2/3.  ;)
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 
The following users thanked this post: hammy

Online edavid

  • Super Contributor
  • ***
  • Posts: 3451
  • Country: us
Re: Resilient Design Principles
« Reply #14 on: January 18, 2017, 06:33:32 pm »
Somewhere in this forum (?) I heard a story about engineering before the 1970s and after.  It was said that the work of the engineers changed drastically.  Before that time everything was designed to last "forever".  After that, the universities educated engineers to build stuff with minimal cost and effort.

Wait a second, when did universities ever educate engineers in anything to do with product design?  I have never heard of that.

And if you don't think there were crappy, unreliable, short-lived products before the 70s, I guess you don't know much about cars (or TV sets, or kitchen appliances, or ...).
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Resilient Design Principles
« Reply #15 on: January 18, 2017, 09:28:59 pm »
63% is an awfully arbitrary number.  Where does that come from?

But by asking this question, we can think more about the failure modes of particular materials.

Most components fail exponentially with operating conditions (temperature most importantly), so a constant difference in temperature (not a percentage!) improves lifetime by a proportional amount.  This is modeled with the Arrhenius equation.  The temperature difference required to, say, double the lifetime depends on the activation energy, but it's usually around 10C.  Most failures associated with chemical breakdown (like the breakdown of plastics) follow this model.

Some components are subject to diffusion: whether it's a solvent trapped inside a barrier, or a barrier against contaminants outside, the barrier is permeable in either case.  Polarized capacitors fail in this way: aluminum electrolytics by release of electrolyte, solid polymers by ingress of moisture.  Diffusion follows a T^(3/2) law, so gets considerably worse at high temperatures (but not exponentially, at least until something else happens like the solvent boils or the seals fail).

Power goes as V^2, or I^2, or V*I at least.  Temperature usually goes proportional to power, so the temperature rise is halved by dropping V and I to 70% (i.e., to sqrt(2)/2).
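
The arithmetic behind both of those points, sketched in Python (the 0.7 eV activation energy and the temperatures are assumed, illustrative values):

Code: [Select]
import math

K_BOLTZMANN = 8.617e-5   # eV/K

def arrhenius_af(t_use_c, t_stress_c, ea_ev):
    """Lifetime multiplier from running at t_use_c instead of t_stress_c."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_use - 1.0 / t_stress))

# With an assumed 0.7 eV activation energy, a 10 C drop roughly doubles life:
print(arrhenius_af(75.0, 85.0, 0.7))   # ~1.9x

# Power goes as V*I, so dropping both V and I to sqrt(2)/2 (~70%) halves
# the dissipation and hence (roughly) the temperature rise:
print((math.sqrt(2) / 2) ** 2)          # ~0.5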

Some materials paradoxically get better at high temperatures: ceramics and metals can be annealed to relieve defects.  Defects can be caused by high voltage, chemical exposure, radiation, etc.  This is a diffusion effect, so the material properties are usually impaired (such as ionic diffusion allowing current to flow through ceramics, or metals creeping under load).  But not always: superalloys used for jet engine turbine blades retain their strength nearly all the way up to their melting point!



So a lot of materials science, and familiarity with typical component specs, goes into making an accurate life estimate.



And that's just intrinsic component ratings, for all parameters neatly bounded: resistors and capacitors never exposed to surge voltages, transistors not exposed to surge currents or ESD, that sort of thing.

Real environments have a 1/f^2 distribution of transients: stupendously large surges are very rare (like direct lightning strike!), weak transients are common (ESD, EFT), and nominal operation is, well, nominal (like, 99% of the time spent within ratings?).  But if your voltage-to-lifetime function is exponential with voltage, those rare transients will completely dominate the lifetime of your system.



The best recommendation I can make, for operational as well as reliable design, is this: bound your inputs, bound your outputs.  Map the input and output ranges as closely as possible.

Example: an amplifier with a 0-5V output range (bounded by saturation to the supplies), with a gain of 5, needs only a 0-1V input.  You could clamp any signal below 0V or above 1.0V, and have no change in operation.  Which seems to suggest rather the opposite: if it has no effect, why bother adding it?  Ah, but if you consider surge inputs as well as nominal inputs, the reason becomes clear.  Clamping the input to 1V dissipates a hell of a lot less power during a 10A surge, than clamping it at 5V, or 50V does!

Bounding works for current as well as voltage.  If you simply add a clamp device to the input pin, then a high-impedance ESD surge will be clamped nicely, but a low-impedance surge (say from induced lightning on a long cable) will still blow it out (hundreds of amps?).  So you might add a resistor in series with the input, to limit current.  If it's impractical (because size or cost) to dissipate surges, then a replaceable fusible component can be used.  Assuming that replacing parts is part of acceptable operation.
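
A rough sketch of that clamp arithmetic (the 10A surge and the clamp voltages are from the example above; the surge source voltage and series resistor are assumptions):

Code: [Select]
# Instantaneous power a clamp must absorb during a surge: P = V_clamp * I.
I_SURGE = 10.0                           # amps, from the example above

for v_clamp in (1.0, 5.0, 50.0):
    print(f"clamp at {v_clamp:4.0f} V absorbs {v_clamp * I_SURGE:6.1f} W during the surge")

# A series resistor bounds what a low-impedance surge can deliver.
V_SURGE = 500.0      # assumed open-circuit surge voltage on a long cable
R_SERIES = 100.0     # assumed series limiting resistor
V_CLAMP = 1.0
i_limited = (V_SURGE - V_CLAMP) / R_SERIES
print(f"series R limits the surge to {i_limited:.1f} A; "
      f"the clamp then absorbs only {V_CLAMP * i_limited:.1f} W")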

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: hammy

Offline hammyTopic starter

  • Supporter
  • ****
  • Posts: 465
  • Country: 00
Re: Resilient Design Principles
« Reply #16 on: January 18, 2017, 09:36:28 pm »
And if you don't think there were crappy, unreliable, short-lived products before the 70s?

Ok, point taken.

But the original question was about books and guides.  The "good old times" part was just for clarification, and about the experience of greybeard engineers.
« Last Edit: January 19, 2017, 01:08:48 am by hammy »
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #17 on: January 18, 2017, 10:34:53 pm »
63% is an awfully arbitrary number.  Where does that come from?

[...]

So a lot of materials science, and familiarity with typical component specs, goes into making an accurate life estimate.

63% is arbitrary, but the concept is sound, and it is indirectly consistent with Arrhenius approaches.  Presumably the manufacturers have some statistical basis for their ratings.  They are saying that, whatever all the variability in their product is, there are a sufficient number of standard deviations between the failure point and the specified point that they are happy with the answer.  They may well have used an Arrhenius relationship in defining that.  When you step further back you are adding more standard deviations, reducing the probability that a component in the distribution tail will fail to perform.

As others have indirectly pointed out, applying this 63% number requires a little thought.  For supply voltages you do not demand that parts work at 63% of nominal power, or at a similar ratio above nominal.  You make sure your supply doesn't generate voltages more than 63% of the way from nominal to the limit of the most sensitive components.  The concept is simple: don't walk on the edge of the cliff if you don't have to.  When you do have to, make sure that you have taken appropriate steps to ensure you don't fall off.
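
Worked through with assumed numbers, that reading of the rule looks like this:

Code: [Select]
# Derating applied to supply headroom rather than to the raw rating: let the
# supply stray no more than 63% of the way from nominal to the absolute
# maximum of the most sensitive downstream part.  Values are assumptions.
V_NOMINAL = 5.0
V_ABS_MAX = 6.0     # most sensitive downstream component's limit
DERATING = 0.63

v_supply_limit = V_NOMINAL + DERATING * (V_ABS_MAX - V_NOMINAL)
print(f"supply must never exceed {v_supply_limit:.2f} V")   # 5.63 V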

63% is a number that has been around for decades and has proven satisfactory.  Would 70% work as well?  In principle no, but in reality the difference probably can't be detected in any reasonable manner.  There is quite a bit of evidence that numbers like 90% and 100% have a significant negative effect on reliability.  And numbers below 60% start making already-difficult jobs impossible.  So pick numbers in the 60% to 75% range and no one will have any real facts to argue with.  The argument becomes theological.

 

Online tautech

  • Super Contributor
  • ***
  • Posts: 29671
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Resilient Design Principles
« Reply #18 on: January 18, 2017, 11:14:27 pm »
63% is an awfully arbitrary number.  Where does that come from?
Yes, it seems that way, and in fact more exercise of my grey matter recalls 62% as the correct #.
As stated, this # was in the department rules, and you exceeded it at your peril.

Quote
So a lot of materials science, and familiarity with typical component specs, goes into making an accurate life estimate.
Consider that MTBF will lower as components are used closer to their maximums.
Again, reliability (resilience, as hammy called it) suffers.

Quote
If it's impractical (because size or cost) to .................
Then we get into the design cost discussion; long-term reliability costs.  Period.

Selecting a better-suited component so as not to stress its ratings in any way is a small additional cost in relation to the overall cost of bringing a project to market.  However, these days even the bean counters have great influence over the BOM instead of the engineers, and it only serves to fuel the race to the bottom.  :--
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Offline Brumby

  • Supporter
  • ****
  • Posts: 12411
  • Country: au
Re: Resilient Design Principles
« Reply #19 on: January 19, 2017, 01:23:37 am »
63% is an awfully arbitrary number.  Where does that come from?
Yes, it seems that way, and in fact more exercise of my grey matter recalls 62% as the correct #.
As stated, this # was in the department rules, and you exceeded it at your peril.

Ah ... A number determined by a bureaucrat.

Say no more.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Resilient Design Principles
« Reply #20 on: January 19, 2017, 01:43:20 am »
63% is an awfully arbitrary number.  Where does that come from?
Yes, it seems that way, and in fact more exercise of my grey matter recalls 62% as the correct #.
As stated, this # was in the department rules, and you exceeded it at your peril.

Ah ... A number determined by a bureaucrat.

Say no more.

^^^^

A number as accurate and unreasoned as "62%" (or 63, or whatever) could only be such. :)

Clearly, it doesn't fit with Arrhenius models, because those require an absolute reduction in temperature, not a ratio (and pertain only to temperature, not other variables).  That's a valid complaint.

True, a 63% reduction will hand-wavingly accomplish something about as safe, but applying something so arbitrary completely shuts down the discussion of real component failure modes, and of doing a properly good job.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Online tautech

  • Super Contributor
  • ***
  • Posts: 29671
  • Country: nz
  • Taupaki Technologies Ltd. Siglent Distributor NZ.
    • Taupaki Technologies Ltd.
Re: Resilient Design Principles
« Reply #21 on: January 19, 2017, 01:57:07 am »
63% is an awfully arbitrary number.  Where does that come from?
Yes, it seems that way, and in fact more exercise of my grey matter recalls 62% as the correct #.
As stated, this # was in the department rules, and you exceeded it at your peril.

Ah ... A number determined by a bureaucrat.

Say no more.
We/I am talking about a time when the term "bureaucrat" was not in common use as it is today, and what's more, in those days the HODs, managers, Chief Engineers etc. had all got to their positions by knowing their stuff.  :P

I've spent a little time looking for historical (outdated, if you must) engineering design guidelines online to cite, but of course those days were well before the internet.  Maybe I'll find something.  :-//

Dismiss the figure I've offered if you must, you go your way and I'll go mine.

There's NIL wrong with conservative design.
Avid Rabid Hobbyist.
Some stuff seen @ Siglent HQ cannot be shared.
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #22 on: January 19, 2017, 02:01:20 am »
Clearly, it doesn't fit with Arrhenius models, because those require an absolute reduction in temperature, not a ratio (and pertain only to temperature, not other variables).  That's a valid complaint.

The Arrhenius model has been applied successfully to many "stress vs threshold energy" situations, and has failed in many others.  The Arrhenius equation actually applies only in a rather limited set of circumstances, which often do not describe the failure or operation of a device.  So it doesn't even work consistently with absolute temperature as the variable.

So there are three cases:

1.  Those where Arrhenius is technically correct and applies.
2.  Those where two properties exist that can be substituted for temperature and activation energy, are known, and give rates that match reality.  Voltage and a voltage threshold are a common pair.
3.  Everything else.

I was referring to case 2, which is technically not Arrhenius but is often called that.
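
For case 2, here is one sketch of the kind of substitution meant.  The exponential form with voltage standing in for temperature, and all the numbers, are my assumptions for illustration rather than an established model for any particular part:

Code: [Select]
import math

# Case 2 sketch: Arrhenius-shaped law with an applied voltage standing in
# for temperature and a voltage threshold standing in for activation
# energy: rate ~ exp(-v_threshold / v_applied).  All values are assumptions.
def accel_factor(v_use, v_stress, v_threshold):
    """Lifetime multiplier from running at v_use instead of v_stress."""
    return math.exp(v_threshold * (1.0 / v_use - 1.0 / v_stress))

# Running a part at 30 V instead of 40 V, with an assumed 200 V threshold:
print(accel_factor(30.0, 40.0, 200.0))   # ~5.3x longer life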

 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 28300
  • Country: nl
    • NCT Developments
Re: Resilient Design Principles
« Reply #23 on: January 19, 2017, 02:05:13 am »
Hi

I'm looking for resilient design principles in electronics engineering. Are there any existing resources on this topic (books or whitepapers)?
I thought there must be a counterpart to the nowadays common "build-as-cheap-as-possible" design principles.  :-//
You can apply the same math/rules, but just set the lifetime of what you are designing to a much longer period.

What is probably not in any book, but will help a lot, is thinking about which parts are the most resilient.  For example: if a regulator would have a lot of voltage across it, I add a series resistor (at some distance) which dissipates most of the power.  In general: heat, thermal cycles and mechanical stress are the enemies of electronics.
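
A quick sketch of that trick with invented numbers (Python; a linear regulator, keeping enough headroom across it for dropout):

Code: [Select]
# Move dissipation out of a linear regulator and into a series resistor,
# keeping enough voltage across the regulator for dropout.  Assumed values.
V_IN, V_OUT, I_LOAD = 12.0, 5.0, 0.5    # input, output (volts), load (amps)
V_HEADROOM = 2.0                        # keep >= 2 V across the regulator

r_series = (V_IN - V_OUT - V_HEADROOM) / I_LOAD   # 10 ohm series resistor
p_resistor = I_LOAD ** 2 * r_series               # 2.5 W burned in the resistor
p_regulator = V_HEADROOM * I_LOAD                 # 1.0 W left in the regulator
p_alone = (V_IN - V_OUT) * I_LOAD                 # 3.5 W if it took it all

print(f"R = {r_series:.0f} ohm: resistor {p_resistor:.1f} W, "
      f"regulator {p_regulator:.1f} W (vs {p_alone:.1f} W alone)")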
« Last Edit: January 19, 2017, 02:09:08 am by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Online CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5516
  • Country: us
Re: Resilient Design Principles
« Reply #24 on: January 19, 2017, 02:12:16 am »
Dismiss the figure I've offered if you must, you go your way and I'll go mine.

There's NIL wrong with conservative design.

If you want a more nuanced set of advice on derating, from folks who really care about reliability, check out

http://www.navsea.navy.mil/Home/Warfare-Centers/NSWC-Crane/Resources/SD-18/

It is based on decades of experience finding what has failed in a very wide variety of military equipment (communications gear, navigation gear, guidance electronics, flight controls, test gear, weapons and the like).

They have a wide range of recommendations based on component type and application, but you will find that using 62% won't be far off.

« Last Edit: January 19, 2017, 02:23:09 am by CatalinaWOW »
 
The following users thanked this post: tautech

