I remember reading an article by HP many years ago, probably from the HP Journal, about their examination of failure statistics. It concluded that the vast majority of failures were systematic faults such as design issues, manufacturing process problems (e.g. poorly trained operators), apparently insignificant changes in the manufacturing processes of components supplied by third parties, etc. Failures were rarely random or due to wear-out, meaning that MTBFs were pretty much worthless.
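For what it's worth, this is the sort of arithmetic those MTBF figures feed into - the usual constant-failure-rate (exponential) model where random, independent failures simply add up across a series system - and it's exactly what falls apart when the real failures are systematic. A rough sketch with made-up component numbers (not from anything above):

```python
import math

# Hypothetical per-component MTBF figures in hours -- purely illustrative.
mtbf_hours = {"psu": 200_000, "cpu_board": 500_000, "line_card": 300_000}

# Constant-failure-rate assumption: lambda = 1/MTBF, failures random and
# independent, so series-system failure rates simply add.
lambda_total = sum(1.0 / m for m in mtbf_hours.values())
system_mtbf = 1.0 / lambda_total

# Probability of surviving one year under the exponential model.
hours_per_year = 365.25 * 24
r_one_year = math.exp(-lambda_total * hours_per_year)

# "Five nines" availability expressed as allowable downtime per year.
downtime_min_per_year = (1 - 0.99999) * hours_per_year * 60

print(f"System MTBF: {system_mtbf:,.0f} h")
print(f"P(survive 1 year): {r_one_year:.3f}")
print(f"Five-nines downtime budget: {downtime_min_per_year:.1f} min/year")
```

None of which says anything about a weakened knob, a changed potting compound or a kettle left boiling in the container.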
Problems included overstressed plastic knobs/switches in scopes weakening over time due to heat or ozone exposure, and overheating failures due to inadequate thermal design. Some of these, such as excessive temperatures, can be predicted and avoided, but there are many failure modes which are much harder to predict - perhaps potting/insulation breakdown in high-voltage areas. Sure, the design might be similar to previous ones, but perhaps the voltage is a bit higher, and/or the chemical composition(s) have changed slightly for cost or regulatory reasons, or the cooling airflows are slightly different, allowing more dirt, smoke, etc. to be deposited in a critical area. Extended and accelerated testing, if you have the time and money to do it, may not reveal any problems, and it may be a few years before large numbers start to fail in the field.
A big problem is that the life cycle of many designs/technologies/components is too short these days to be able to collect the relevant statistics. Obviously you can learn from previous problems and try to avoid them, but a new design is likely to be sufficiently different to make many of the previous issues irrelevant. Excepting Dave's favourites, including keeping electrolytics away from heat sources, Silastic, and proper mounting of TO-220s etc.!
I used to work for a telecoms company where 5 9's reliability was taken seriously. I was told a story about one of our products: telephone exchanges built into air-conditioned shipping containers. These were sold in the Middle East to accommodate burgeoning demand until permanent facilities could be built. Failures started to occur, and the engineer sent to investigate discovered that transistors were falling off the PCBs and could be found at the bottom of the racks!
It transpired that the TO-18 metal can parts (similar to BC109s) were manufactured with gold-plated legs, but it was known that gold can embrittle solder joints. Thus, to improve reliability, the plating had to be chemically stripped from the Kovar leads, which were then tinned (I don't know if the manufacturer or a third party did this). All very well, except that it left a small part of the lead next to the glass hermetic seal untinned, probably to avoid damaging the seal through thermal shock. Kovar is a nickel-cobalt-iron alloy and it rusts...
An air-conditioned environment in the desert should be one of the driest places on the planet, so it was a bit of a puzzle as to what was going on, until the engineer happened to arrive on site early one day. He discovered that the local maintenance employees, on arriving at work in the mornings, would plug their kettle(s) in inside the container, because there was power available, and as often as not would forget to turn them off, turning the exchange into a steam room!
No amount of component MTBF stats or studying of pretty bathtub curves will help you foresee these sorts of scenarios (albeit a rather extreme example). It would be interesting to know NASA's take, and that of other high-reliability industries including the satellite business, on random/wear-out versus systematic failures.
[Edit] Typo: systemic -> systematic