I will continue to learn, for sure. The noise book from 1988 seems nice but old; I'm surprised to see nothing newer written on the subject.
Age means nothing -- the theory of electromagnetism was well developed within a handful of decades of its discovery; the only thing that has changed is the ease with which, and the means by which, we are able to interact with the EM field.
There is very little in this book that wouldn't be familiar to, say, the eggheads at Bell Labs a whole century ago (Zobel, Campbell, et al.), or to even more rarefied groups almost as long again before that -- physicists, or more generally "natural philosophers" as the term was at the time: Hertz, Helmholtz, Maxwell, etc. Even Faraday, despite his reluctance toward (or perhaps unfamiliarity with) mathematics, probably would've understood much of the book's concept, purpose and method, perhaps in between marveling at the depth of modern technology, of course: that we deal with waves routinely, not just the separate E and M fields that concerned his experiments; not to mention the ease with which we generate said waves.
In other words -- the enduring value of books like this lies in the fields (literally, in this case) they deal with. EM itself does not change; only the technology we use to interact with it, and the analytical methods we use to approach it, do.
If you compare to a book like Terman's Radio Engineer's Handbook, or F. Langford-Smith's Radiotron Designer's Handbook, the mathematical approximations used were much cruder -- the engineer might be working on pencil and paper, slide rule in hand -- and plotted data (nomographs) and empirical measurements were the tools of choice when blindingly simple equations didn't suffice. Whereas today, not only are more complicated formulas feasible (consider the closed-form approximations in common circulation for microstrip impedance, for example), but entire algorithms are, since everyone and their dog has a computer available at all times.
Which are still excellent references, by the way. The simplicity with which they approach problems is refreshing, and the results are no less effective -- we just might have tighter performance demands nowadays (squeezing out every dB of SNR and DR, while minimizing circuit size and power consumption, say), which sometimes suggest alternative solutions, but generally the conventional topologies work fine, or at the very least as starting points.
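To make the contrast concrete, here's roughly what one of those more complicated formulas looks like once a computer is doing the work: a common closed-form (Hammerstad-style) approximation for microstrip characteristic impedance. A minimal sketch only; the FR-4 geometry in the example is an assumption of mine, not something out of these books:

[code]
import math

def microstrip_z0(w, h, er):
    """Approximate characteristic impedance (ohms) of a microstrip trace.

    w = trace width, h = dielectric height (same units), er = relative
    permittivity. Hammerstad-style closed form, zero trace thickness;
    good to a few percent for ordinary PCB geometries.
    """
    u = w / h
    if u <= 1.0:
        e_eff = (er + 1) / 2 + (er - 1) / 2 * (
            1 / math.sqrt(1 + 12 / u) + 0.04 * (1 - u) ** 2)
        return 60 / math.sqrt(e_eff) * math.log(8 / u + u / 4)
    e_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    return (120 * math.pi / math.sqrt(e_eff)
            / (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Example: a 1.9 mm trace over 1.0 mm of FR-4 (er ~ 4.4) lands near 50 ohms.
print(round(microstrip_z0(1.9, 1.0, 4.4), 1))
[/code]

Try doing that, per trace, on a slide rule.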
A contemporary case being the development of electric wave filters: at first single-pole (RC or LR) sections, then constant-k (designed by image parameters, a transformation that's easier to work in; the resulting pole and zero placements hampered performance, but the circuits were far easier to design), then m-derived (a more general extension of the method), and eventually full analytical synthesis (by the... 40s or so, I forget? Crowned by Zverev's Handbook of Filter Synthesis, finally published in 1967), which in a sense fully and completely perfected the art. Since then, optimization tools have addressed the minor and more practical aspects of filter design: accommodating component selection from commercially available values; ensuring performance over component tolerances; and reducing engineer time by automating much of the process. Not to mention incorporating transmission lines or full field simulation into the process, a critical step for microwave circuitry.
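Just to show how little arithmetic the image-parameter approach demanded, here's the entire constant-k low-pass design, which amounts to two one-line formulas. A minimal sketch; the 600 ohm / 3.4 kHz figures are illustrative telephone-band values I picked, not anything from the books:

[code]
import math

def constant_k_lowpass(r0, fc):
    """Constant-k low-pass prototype via the image-parameter method.

    r0 = design (image) impedance in ohms, fc = cutoff frequency in Hz.
    Returns (L, C): total series inductance and total shunt capacitance
    of one full section; split them to form T or pi half-sections.
    """
    L = r0 / (math.pi * fc)        # henries
    C = 1 / (math.pi * r0 * fc)    # farads
    return L, C

# Example: a 600 ohm line with a 3.4 kHz cutoff (telephone-band numbers):
L, C = constant_k_lowpass(600, 3400)
print(f"L = {L*1e3:.1f} mH, C = {C*1e9:.0f} nF")   # ~56.2 mH, ~156 nF
[/code]

That's the whole design procedure; everything a modern synthesis tool adds (exact response shapes, tolerance analysis, value rounding) sits on top of arithmetic this simple.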
Or, for a more direct case: not that exceptionally low power supply impedance was much of a concern back then (tube circuits are fine with some ohms of it), but almost any of these authors would look at your optimization problem, and a system diagram, and ask: why not simply increase the error amp gain? Or, since you don't have control over that (it's internal to the IC), why not choose a regulator with higher gain? It's surely cheaper and more compact than trying to stuff oversized capacitors into the box!
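For a feel of why they'd say that: closed-loop output impedance falls roughly as the open-loop impedance divided by (1 + loop gain), so gain is spectacularly cheap next to bulk capacitance. The 1 ohm open-loop figure, 1 milliohm target and 1 kHz ripple frequency below are made-up numbers purely for illustration:

[code]
import math

z_open = 1.0      # ohms: assumed open-loop (pass element) output impedance
z_target = 1e-3   # ohms: desired closed-loop output impedance
f_ripple = 1e3    # Hz: frequency where the impedance matters

# Feedback route: Zout(closed) ~= Zout(open) / (1 + loop gain),
# so the required loop gain is just the ratio of the two impedances.
loop_gain = z_open / z_target - 1
print(f"loop gain needed: {loop_gain:.0f}x ({20*math.log10(loop_gain):.0f} dB)")

# Brute-force route: a capacitor whose reactance alone meets the target
# (ignoring ESR, which in practice dominates long before this point).
c_needed = 1 / (2 * math.pi * f_ripple * z_target)
print(f"capacitance needed: {c_needed*1e6:.0f} uF")
[/code]

Sixty-odd dB of loop gain versus some 150,000 uF of capacitance; the old-timers' instinct holds up.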
They would also have the wisdom to ask: do you really need this? An intrepid one might even measure your load to see how much ripple it can tolerate before malfunctioning, and set the power supply's maximum a modest fraction below that critical level.
Tim