63% is an awfully arbitrary number. Where does that come from?
But by asking this question, we can think more about the failure modes of particular materials.
Most components fail exponentially with operating conditions (temperature most importantly), so a constant difference in temperature (not a percentage!) changes lifetime by a constant factor. This is modeled with the Arrhenius equation. The temperature difference required to, say, double the lifetime depends on activation energy, but it's usually around 10C. Most failures associated with chemical breakdown (like the breakdown of plastics) follow this model.
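To put a number on that, here's a quick sketch of the Arrhenius acceleration factor (Python; the 0.7 eV activation energy is just a typical assumed value, not from any particular datasheet):

    import math

    K_B = 8.617e-5   # Boltzmann constant, eV/K

    def arrhenius_af(t_hot_c, t_cool_c, ea_ev=0.7):
        """Lifetime multiplier for running at t_cool_c instead of t_hot_c (degrees C).
        ea_ev is the activation energy; 0.7 eV is only a typical assumed value."""
        t_hot = t_hot_c + 273.15
        t_cool = t_cool_c + 273.15
        return math.exp((ea_ev / K_B) * (1.0 / t_cool - 1.0 / t_hot))

    print(arrhenius_af(85, 75))         # ~1.9x lifetime for running 10 C cooler (Ea = 0.7 eV)
    print(arrhenius_af(85, 75, 1.0))    # ~2.5x with a higher activation energy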
Some components are subject to diffusion: whether it's a solvent trapped inside a barrier, or a barrier against contaminants outside, the barrier is permeable either way. Polarized capacitors fail in this way: aluminum electrolytics by release of electrolyte, solid polymers by ingress of moisture. Diffusion follows a T^(3/2) law, so it gets considerably worse at high temperatures (but not exponentially, at least until something else happens, like the solvent boiling or the seals failing).
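For comparison with the Arrhenius factor above, the T^(3/2) scaling is a lot gentler; a quick sketch (the temperatures are assumed, just for illustration):

    def diffusion_ratio(t_hot_c, t_cool_c):
        """Relative diffusion rate at t_hot_c vs. t_cool_c, assuming a T^(3/2) law (T in kelvin)."""
        return ((t_hot_c + 273.15) / (t_cool_c + 273.15)) ** 1.5

    print(diffusion_ratio(85, 25))   # ~1.3x faster at 85 C than at 25 C
    # versus roughly 100x for an Arrhenius process with Ea = 0.7 eV over the same span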
Power goes as V^2, or I^2, or V*I at the least. Temperature rise is usually proportional to power, so the rise is halved by dropping V and I to 70% (i.e., to 1/sqrt(2)).
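In numbers (the voltage, current and thermal resistance here are made up; only the proportionality matters):

    def temp_rise(v, i, theta):
        """Steady-state temperature rise (C) for power V*I into thermal resistance theta (C/W)."""
        return v * i * theta

    full    = temp_rise(10.0, 2.0, 5.0)                  # 100 C rise at full ratings
    derated = temp_rise(10.0 * 0.707, 2.0 * 0.707, 5.0)  # ~50 C rise at 70% of V and I
    print(full, derated)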
Some materials paradoxically get better at high temperatures: ceramics and metals can be annealed to relieve defects (caused by high voltage, chemical exposure, radiation, etc.). Annealing is itself a diffusion effect, and diffusion more often impairs material properties (ionic diffusion allowing current to flow through ceramics, or metals creeping under load). But not always: superalloys used for jet engine turbine blades retain their strength nearly all the way up to their melting point!
So a lot of materials science, and familiarity with typical component specs, goes into making an accurate life estimate.
And that's just intrinsic component ratings, with all parameters neatly bounded: resistors and capacitors never exposed to surge voltages, transistors never exposed to surge currents or ESD, that sort of thing.
Real environments have a 1/f^2 distribution of transients: stupendously large surges are very rare (like a direct lightning strike!), weak transients are common (ESD, EFT), and nominal operation is, well, nominal (like, 99% of the time spent within ratings?). But if lifetime falls exponentially with voltage, those rare transients will completely dominate the lifetime of your system.
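A toy model makes the point; the inverse-square occurrence rate and the exponential damage-per-event law below are both just illustrative assumptions, not from any standard:

    import math

    # occurrence rate per year at each surge amplitude (volts), falling off as 1/V^2
    amplitudes = [10, 100, 1000, 10000]
    rates = [1e6 / v**2 for v in amplitudes]       # 10000, 100, 1, 0.01 events/year

    # damage per event growing exponentially with amplitude (arbitrary 500 V scale factor)
    damages = [math.exp(v / 500.0) for v in amplitudes]

    for v, r, d in zip(amplitudes, rates, damages):
        print(v, r * d)    # expected damage per year from each amplitude class
    # the 10 kV surges, at one per century, still dominate the total by a factor of several hundred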
The best recommendation I can make, for operational as well as reliable design, is this: bound your inputs, bound your outputs. Map the input and output ranges as closely as possible.
Example: an amplifier with a 0-5V output range (bounded by saturation to the supplies), with a gain of 5, needs only a 0-1V input. You could clamp any signal below 0V or above 1.0V and have no change in operation. Which seems to suggest rather the opposite: if it has no effect, why bother adding it? Ah, but if you consider surge inputs as well as nominal inputs, the reason becomes clear. Clamping the input at 1V dissipates a hell of a lot less power during a 10A surge than clamping it at 5V, or 50V, does!
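Rough numbers, assuming a 10A surge lasting 20 microseconds (the duration is an arbitrary assumption for illustration):

    def clamp_energy(v_clamp, i_surge, t_surge):
        """Energy (joules) the clamp absorbs: clamp voltage times surge current times duration."""
        return v_clamp * i_surge * t_surge

    for v in (1.0, 5.0, 50.0):
        print(v, clamp_energy(v, 10.0, 20e-6))
    # 0.2 mJ, 1 mJ, 10 mJ; peak power of 10 W vs. 50 W vs. 500 W in the clamp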
Bounding works for current as well as voltage. If you simply add a clamp device to the input pin, a high-impedance ESD surge will be clamped nicely, but a low-impedance surge (say, from induced lightning on a long cable) will still blow it out (hundreds of amps?). So you might add a resistor in series with the input to limit current. If it's impractical (because of size or cost) to dissipate surges, a replaceable fusible component can be used, assuming that replacing parts counts as acceptable operation.
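A sketch of the sizing arithmetic (the 1kV surge, 5V clamp and 1A limit are assumed values, not from any surge standard):

    def series_r_for_surge(v_surge, v_clamp, i_max):
        """Minimum series resistance to keep surge current below i_max into a clamp held at v_clamp."""
        return (v_surge - v_clamp) / i_max

    r = series_r_for_surge(1000.0, 5.0, 1.0)   # limit a 1 kV surge to 1 A: ~995 ohms
    print(r)
    print(1.0**2 * r * 20e-6)                  # energy in the resistor for a 20 us event: ~20 mJ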
Tim