For sure. I used to be good at math taught in school, but have a hard time with understanding some of the more advanced equations now. Things like Pythagoras or trigonometry are still ok, but differential equations and such take digging very deep.
Would like to learn more about calculating coefficients for filters in DSP, but there are so many other things I'd also like to know or do. Hard to choose, and I often doze off halfway through when reading stuff.
Well, now that's an easier one, at least! Depending on what aspect you're looking for, heh.
First you need the basics: IIR or FIR type. For analysis, the Z-transform is analogous to the Laplace/Fourier transform; despite z being simply a time shift (z^-1 means taking the previous sample x[n-1] while evaluating an equation at sample n), it works out the same, and indeed there exists a mapping (e.g. the bilinear transform) between the s-plane (left-half-plane poles = stable, right = unstable) and the z-plane (poles inside the unit circle = stable, outside = unstable). So you can perfectly transform a given continuous-time (RLC) filter to a discrete-time (sampled) filter, within restrictions of sample rate and all that of course.
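To make the continuous-to-discrete mapping concrete, here's a minimal sketch of my own (assumptions: a first-order RC lowpass H(s) = a/(s + a) with a = 1/RC = 2*pi*fc, discretized by the bilinear transform, no frequency prewarping, so it's only accurate well below Nyquist):

```python
import math

def rc_lowpass_coeffs(fc, fs):
    """Discretize H(s) = a/(s + a), a = 1/RC, via the bilinear transform
    s -> (2/T)(1 - z^-1)/(1 + z^-1), T = 1/fs.  Algebra gives
    H(z) = aT(1 + z^-1) / ((2 + aT) + (aT - 2) z^-1).
    Returns (b0, b1, a1) for y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    a = 2 * math.pi * fc          # analog cutoff in rad/s (= 1/RC)
    aT = a / fs
    b0 = aT / (2 + aT)
    b1 = b0
    a1 = (aT - 2) / (2 + aT)
    return b0, b1, a1

def filt(x, coeffs):
    """Run the first-order difference equation over a sample list."""
    b0, b1, a1 = coeffs
    y, xp, yp = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * xp - a1 * yp
        y.append(yn)
        xp, yp = xn, yn
    return y
```

A quick sanity check is the DC gain, (b0 + b1)/(1 + a1), which works out to exactly 1, same as the analog prototype.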
Of IIR, there are a few methods to use, of which the most popular / easy / stable / general is probably the biquad. Five coefficients which can be tuned for any 2nd-order filter -- bandpass/stop, LP/HP, shelving, peaking, and yes, all-pass too. In short, a rational 2nd order i.e. two poles and two zeroes, so, take your pick. Solutions are straightforward enough:
https://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/
View source to see the expressions in the JS; it should be easy enough to understand even if you don't know JS exactly.
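For a worked sketch (not the calculator's exact code -- these are the cookbook-style lowpass formulas, normalized so a0 = 1, with the usual convention of b's feedforward and a's feedback):

```python
import math

def biquad_lowpass(fc, fs, q):
    """2nd-order lowpass coefficients, 'Audio EQ Cookbook' style.
    Returns (b0, b1, b2, a1, a2), normalized so a0 = 1."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha
    return ((1 - cw) / 2 / a0,     # b0
            (1 - cw) / a0,         # b1
            (1 - cw) / 2 / a0,     # b2
            -2 * cw / a0,          # a1
            (1 - alpha) / a0)      # a2

def biquad(x, c):
    """Direct Form I: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2]
                             - a1 y[n-1] - a2 y[n-2]."""
    b0, b1, b2, a1, a2 = c
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        out.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return out
```

The DC gain, (b0 + b1 + b2)/(1 + a1 + a2), comes out to exactly 1 for the lowpass case -- a handy check on any coefficient set.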
The derivation of these formulas, in turn, will be along the same lines as traditional analytical filters: position the poles/zeroes for a given desired response, or approximations to a straight-line (passband within x dB, stopband beyond y dB, etc.) response.
And a single biquad stage is 2nd order, so what to do with higher orders? Same thing we do with op-amps of course: each stage contributes a pole pair, and we cascade stages as needed for the desired higher-order overall response. So the first stage has low-Q poles, the next ones progressively higher Q, and so on, until you have the total response for whatever Butterworth or Chebyshev or etc. type you wanted.
Repeated 2nd-order blocks are preferred over a single higher-order block because of sensitivity: while it's possible to implement an IIR filter that way, accumulator bits / coefficient precision become much more critical. By analogy, consider the Sallen-Key filter, which is said to be more sensitive to component values/tolerances than other topologies like the MFB; now suppose you go further, to an nth-order Sallen-Key (yes, higher-order active filters around a single op-amp are possible!), which you can imagine will be that much more sensitive to component values still. Then consider that in the discrete case the poles must sit within the unit circle rather than merely in a half-plane, so a small numerical error can easily push them outside the circle (the roots of a polynomial are highly sensitive to its coefficient values -- an "ill-conditioned" problem in general), and that error can arise both from rounding of the coefficients and from truncation in the accumulator.
I suppose if you just use, for example, floating point (at float or double precision as the case may be), a higher-order direct-form filter might be tolerable, and maybe you can save a few cycles on the computation by making it a little more compact, or using fewer memory ops or whatever. For most purposes, with say 16-bit fixed point, or 32-bit (or smaller) floats (standard or not), cascaded 2nd-order sections are probably preferred.
As for FIR, they're quite trivial: the coefficients are the impulse response, done and done. You're literally cranking the convolution of the input with the filter's response, for each and every sample of the output. So, you want a Gaussian response? Plot a Gaussian hump and away you go! Want something sharper? Add some ringing, using whatever exponential- or sinc-shaped waveforms you might like; or choose any of the various well-known window functions for their respective spectral properties. It's all very easy to do, perfectly stable, and the only downside is, if you need a low cutoff frequency, well, you're going to need to convolve a heck of a lot of samples...
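A minimal sketch of the classic recipe -- sample the ideal (sinc) impulse response, window it, normalize (tap count and window choice here are arbitrary examples):

```python
import math

def windowed_sinc_lowpass(fc, fs, ntaps):
    """FIR lowpass taps: sample the ideal sinc impulse response,
    apply a Hamming window, and normalize for unity DC gain.
    ntaps should be odd for a symmetric (linear-phase) filter."""
    m = ntaps - 1
    h = []
    for n in range(ntaps):
        x = n - m / 2
        # ideal lowpass impulse response: sin(wc*x)/(pi*x), wc = 2*pi*fc/fs
        ideal = (2 * fc / fs if x == 0
                 else math.sin(2 * math.pi * fc / fs * x) / (math.pi * x))
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)   # Hamming window
        h.append(ideal * w)
    s = sum(h)
    return [c / s for c in h]

def fir(x, h):
    """Plain convolution: each output sample is one MAC pass over the taps."""
    out = []
    hist = [0.0] * len(h)
    for xn in x:
        hist = [xn] + hist[:-1]
        out.append(sum(c * v for c, v in zip(h, hist)))
    return out
```

Note the "low cutoff costs many taps" tradeoff shows up directly: the sinc's main lobe widens as fc drops, so you need proportionally more taps to contain it.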
So IIR tends to be better for low cutoffs, but depending on how much CPU power / memory bandwidth you have available, either is often suitable.
So, the hardest part is as hard as any other filter -- polynomial solutions, of arbitrary order, approximating some frequency response. The easiest part is, I'd say, easier than analog filters: it's literally just the mechanical process of adding samples multiplied by coefficients (the MAC operation). In an FIR, no coefficients interact with each other, so it's unconditionally stable; an IIR is equivalent to an active (analog) filter.
And sometimes (often, even?!) we don't even bother with that, because the shitty frequency response of a "boxcar" filter (rect[n] window, i.e. sliding average) is a suitable sacrifice for its simplicity: just toss each sample into a circular buffer, add the latest (nth) sample to the accumulator, and subtract the (n-N)th sample -- the one that just fell out of the buffer. Absolutely zero multiplication required (well, aside from normalizing the output gain), and all it needs is memory -- of which only two accesses are needed per sample. Which is a kind of example (I think?) of a CIC ("cascaded integrator-comb") filter: notice the summation per sample (the output/accumulator value) takes the previous value plus the difference between the nth and (n-N)th samples -- it's the integral of a derivative. Which does mean the value could become offset accidentally (the integral of a derivative equals the function "plus a constant", which is to say, the DC offset is undefined in general), but in a deterministic computer that "accident" by definition never happens, and the "plus a constant" equals the initialized offset (which in turn makes it a definite integral starting from zero, "plus a constant" accounted for).
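The whole thing fits in a few lines -- a sketch of the circular-buffer version (names are mine):

```python
class MovingAverage:
    """Sliding (boxcar) average over the last N samples: one add, one
    subtract, two memory accesses per sample -- no multiplies in the loop."""
    def __init__(self, n):
        self.n = n
        self.buf = [0.0] * n     # circular buffer, zero-initialized
        self.idx = 0
        self.acc = 0.0           # running sum: the "integrator"

    def step(self, x):
        # add newest, subtract the sample falling out: the "comb"
        self.acc += x - self.buf[self.idx]
        self.buf[self.idx] = x
        self.idx = (self.idx + 1) % self.n
        return self.acc / self.n   # gain normalization, the only divide/multiply
```

Zero-initializing the buffer is exactly the "plus a constant accounted for" point above: the accumulator starts consistent with the buffer contents, so it can never drift.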
As for diff eq, I'm not too into it, but I never really needed more than linear equations anyway (with, again, polynomial solutions -- filters, control loops, etc.). And anything worse I'd gladly just plug into a numerical (or CAS) solver; particular analytical solutions are likely to be more of a curiosity than practical (e.g. Bessel functions, which aren't any easier to manipulate).
There is one interesting problem I've played with: the temperature distribution on a flat sheet, with a circular isothermal heat source. Assuming convection proportional to temp rise, heat loss at any given point on the sheet depends on its temperature, while heat also spreads out (through the sheet) to larger radii, where there's more (differential) area to dissipate into, etc. Obviously the sheet won't be uniformly heated -- at a basic guess, it should be reciprocal or logarithmic with distance, because of the available area at a given radius -- but because the loss depends on temp as well, it must be something slightly different from that. Okay well, set up the equations, push things around a bit and, fair enough, there's an equation I can't solve. Let me see what Wolfram Alpha thinks of it.

Turns out it's a Bessel function, with the first zero I think at the edge of the sheet (for some circular outer edge; which we can take to infinity if we like). I forget the exact proportions that go into and around the function, but yeah, it's hottest in the center, dropping as radius goes up, and not quite as 1/r or ln(r/r_0) or anything, it's a little bit different. So that was a cool problem.
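Setting it up numerically is easy enough even without the Bessel identities. My guess at the balance equation (all numbers below are made up): with sheet conductance k*t and convection coefficient h, an annulus balance gives theta'' + theta'/r = m^2 * theta with m^2 = h/(k*t), temp fixed at the source edge and zero at the outer edge. A finite-difference sketch:

```python
def fin_disc_profile(r0, R, m, n=400):
    """Steady-state temp rise theta(r) on an annular sheet:
    theta'' + theta'/r = m^2 * theta,  theta(r0) = 1,  theta(R) = 0.
    Central differences on n intervals + Thomas (tridiagonal) solve.
    Returns theta at the n+1 grid points r0 .. R."""
    h = (R - r0) / n
    a, b, c, d = [], [], [], []          # tri-diagonals + RHS, interior nodes
    for i in range(1, n):
        r = r0 + i * h
        a.append(1 / h**2 - 1 / (2 * h * r))   # sub-diagonal
        b.append(-2 / h**2 - m * m)            # diagonal
        c.append(1 / h**2 + 1 / (2 * h * r))   # super-diagonal
        d.append(0.0)
    d[0] -= a[0] * 1.0                   # theta(r0) = 1 boundary term
    # (theta(R) = 0 contributes nothing to the last row)
    for i in range(1, n - 1):            # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    theta = [0.0] * (n - 1)
    theta[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):       # back substitution
        theta[i] = (d[i] - c[i] * theta[i + 1]) / b[i]
    return [1.0] + theta + [0.0]
```

With m = 0 (no convection) this reproduces the plain ln(R/r)/ln(R/r0) spreading profile; with m > 0 it drops faster than the log, which matches the intuition that heat is being lost along the way.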
Suppose I should go set that up again and see what the exact ratios were...
I recall plugging in the heat spreading rate of PCB stock (approximately anyway; it's mostly due to the copper anyway, so take the total thickness across however many layers you've poured/planed) and getting about an inch or two radius -- which is just as we expect for the hot spot on a 2oz 2-layer PCB, and for say a D(2)PAK or so, the total power dissipation (for reasonable max temp rise) is around... 5W or so, I think it was? So, even as poor an estimate as proportional convection loss is (it's actually steeper than proportional, and depends on orientation, and bits above the hot-part have a lower coefficient than below because, well, the rising air is already warmed!...etc...), it's not too bad overall.
Other examples I've applied diff. eq. to include uniform heat dissipation along a heat spreader sunk at one end (temp goes quadratically with distance, vertex at the uncooled end -- makes sense), or the hold-up time of an SMPS (also a quadratic). Quadratics and exponentials are nice as you need no tricks to solve them (or just one simple trick for exponentials): just integrate and go.
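The hold-up case integrates in one line: constant power draw P from a cap means C*V*dV/dt = -P, so V^2 falls linearly with time (the quadratic), and the time from V0 down to Vmin is t = C*(V0^2 - Vmin^2)/(2P). A sketch, with made-up example numbers:

```python
def holdup_time(c, v0, vmin, p):
    """Time a capacitor C charged to V0 can supply constant power P
    before drooping to Vmin: energy balance P*t = C*(V0^2 - Vmin^2)/2."""
    return c * (v0**2 - vmin**2) / (2 * p)
```

E.g. 470 uF at 340 V bus, 250 V minimum, 600 W load gives about 21 ms -- roughly a mains cycle, which is the usual design target.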
Or the uh... what are some other recent workings-out, *thumbs through notes*, oh yeah:
- A couple, just, simple proofs reminding of certain integral solutions (half charge/energy point of an exponential decay; RMS of a wave)
- Simplifying complex arithmetic (for JS implementation)
- Transfer functions for certain RLC networks
- Or uh, going back a couple years, I derived a "tuning" equation for the series-resonant induction heater circuit. Despite being 3 elements in a nominally 2nd-order system, this involves a 4th-order polynomial solution (basically because the solution isn't precisely symmetrical for ±ω, i.e. can't be reduced in terms of ω^2 --> ω -- which is to say, it isn't of the form a ω^4 + b ω^2 + c = 0), but the solution is very close to the nearest quadratic* so a numerical solution converges rapidly.
*Hm, I wonder in what sense polynomials can be projected into each other; in the sense of projective spaces, mapping a higher-dimension space to a lower one. I suppose you have your choice of projections though, both in the linear algebraic sense (take whatever [linear] map you like), and any kind of polynomial (or even more complicated) function you might apply. (Compare 3D perspective projection, where P:[x, y, z] --> [x/z, y/z], a rational relation.)
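Those first bullet-point integral checks, by the way, come out to familiar numbers: the RMS of a sine is peak/sqrt(2), and since stored energy goes as V^2, an exponential decay V0*e^(-t/tau) reaches its half-energy point at t = (tau/2)*ln(2). A quick numerical spot-check of both:

```python
import math

def rms(samples):
    """Root-mean-square of a sample list."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# RMS of one full cycle of a unit sine: expect 1/sqrt(2)
n = 100000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]

# half-energy point of V(t) = V0*exp(-t/tau): energy ~ V^2 decays with
# time constant tau/2, so half the energy remains at t = (tau/2)*ln(2)
tau = 1.0
t_half = 0.5 * tau * math.log(2)
remaining = math.exp(-t_half / tau) ** 2   # fraction of initial energy left
```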
Well, I digress...
Tim