I haven't studied anything QED level or higher, but I understand that's where this sort of math appears.

In regular QM, the typical heavy-handed approach is to set up Schrodinger's equation according to the problem's boundary conditions and potentials, and assume a general-form solution psi = c0 + c1*x + c2*x^2 + ..., where the c_n are potentially complex coefficients and x is the variable (often, vector) of interest (momentum or position or whatever). Then, according to the terms in the differential equation, you solve for the relations between the c_n, and contemplate whether there's a familiar relation or not.

An example of a familiar relation would be:

c_0 = 0

c_1 = 1

c_n = -c_(n-2) / (n*(n-1))

Note this is a recurrence relation: each term is written in terms of an earlier one (here, the term two places before it).

This is simply the Taylor series for the sine function, x - x^3 / 3! + x^5 / 5! - x^7 / 7! + ...
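As a sanity check (a sketch of my own, not part of any textbook derivation), the recurrence can be run numerically and compared against sin(x):

```python
import math

# Build the power-series coefficients from the recurrence:
#   c_0 = 0, c_1 = 1, c_n = -c_(n-2) / (n*(n-1))
def series_coeffs(n_terms):
    c = [0.0, 1.0]
    for n in range(2, n_terms):
        c.append(-c[n - 2] / (n * (n - 1)))
    return c

# Sum the series c_0 + c_1*x + c_2*x^2 + ... at a point
def eval_series(c, x):
    return sum(cn * x**n for n, cn in enumerate(c))

x = 1.3
print(eval_series(series_coeffs(30), x))  # agrees with the next line
print(math.sin(x))
```

Thirty terms already match sin(1.3) to machine precision -- the closed-form "familiar relation" doing its job.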

Typical solutions of Schrodinger's equation include sines (propagation of matter waves), exponentials (probability decreasing with depth inside a barrier), "particle" type expressions (Gaussian pulses or wavelets propagating freely), orthogonal polynomial series around potential wells (Laguerre polynomials appear in the case for the hydrogen atom), and so on. These are either finite or analytically closed examples, in that the expressions can be written fairly simply. A good reason to cherry-pick them as examples.

In the case of the hydrogen atom, the recurrence relation contains degrees of freedom: integers within certain ranges for which the differential equation is satisfied. In other words, a family of equations, each one of which is a solution, as well as any combination (superposition) thereof. The solutions themselves are not so important; the fact that there are parameters easily selecting them, however, is important. These selection numbers are the quantum numbers labeling the eigenstates of the system, and are exactly the n, l, m_l and m_s parameters used in atomic spectroscopy and chemistry.

But what happens if the result is not familiar, and can only be expressed as an infinite series, regardless of the function space used? Meaning: whether you write it as a plain Taylor series, with infinite terms; or as a series in sines/cosines (the Fourier transform), or exponentials, or etc.; no matter what basis is used, an exact expression requires infinite terms -- it cannot be simplified, there is no closed-form analytical way to write it.

In the case of QED (quantum electrodynamics), it is my understanding that series solutions arise which do not converge in the conventional sense.

Suppose we determined that the equation had the following solution:

c_0 = 1

c_n = -c_(n-1)

The coefficients are simply those of Grandi's series, 1 - 1 + 1 - 1 + ...

You're screwed, right? Physics must not exist, because this ludicrous series obviously does not converge.

But we physicists know better. Or we know worse, as the case may be.

The real world obviously exists, so suppose we twist our tongue and say the series

S = 1 - 1 + 1 - 1 + ...

and subtract one,

-1 + S = -1 + (1 - 1 + 1 - 1 + ...)

Now here's the tricky part...

= -S?

And therefore -1 + S = -S, so S = 1/2?

Mathematicians will say: ah, you're rearranging the terms, now you're just going off and doing whatever you want -- it could be zero, or one, or a unicorn, and you'd have something very peculiar! (Ah, that takes me back. Dr. Roy would use a phrase something like that. Great teacher, but I digress.) The physicist, on the other hand, merely shrugs it off, saying: hey, it's what we measure in the laboratory!
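Numerically, the standard way to make this respectable is Cesaro summation: average the partial sums instead of taking them directly. A quick sketch (helper names my own):

```python
# Partial sums of Grandi's series oscillate 1, 0, 1, 0, ...
def grandi_partial_sums(n):
    s, out = 0, []
    for k in range(n):
        s += (-1) ** k
        out.append(s)
    return out

# Cesaro mean: the running average of the partial sums
def cesaro_mean(partials):
    return sum(partials) / len(partials)

ps = grandi_partial_sums(10000)
print(ps[:6])           # [1, 0, 1, 0, 1, 0]
print(cesaro_mean(ps))  # 0.5 -- the standard regularized value of Grandi's series
```

The partial sums never settle, but their average does, and it lands on 1/2, the same value the tongue-twisting algebra assigns.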

There is also the case where, instead of n >= 0, negative n are required as well, so the sum spans all integers, -infty to +infty.

A simple example relating to circuits and signals: the Fourier series of a periodic real waveform has conjugate-symmetric positive and negative frequency components -- frequencies going forwards and backwards infinitely in time, summing to express an arbitrary signal of finite time span. (Of course, a real, physical signal has finite energy, so the coefficients decay and the series converges. The worst analytical case is a discontinuous signal like a square or sawtooth wave, for which the partial sums exhibit the Gibbs phenomenon. The infinite series nonetheless converges as it should.)
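The square-wave case is easy to watch directly. A sketch (my own), summing the odd-harmonic series (4/pi) * sum of sin((2k+1)t)/(2k+1):

```python
import math

# Partial Fourier sum of a unit square wave, keeping n_terms odd harmonics
def square_partial(t, n_terms):
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_terms)
    )

# On the flat top (t = pi/2) the sum converges nicely...
print(square_partial(math.pi / 2, 200))  # close to 1.0

# ...but near the jump at t = 0 it overshoots by about 9% of the jump,
# no matter how many terms are kept: the Gibbs phenomenon.
peak = max(square_partial(k * math.pi / 1000, 200) for k in range(1, 1000))
print(peak)  # roughly 1.18, not 1.0
```

Adding more terms narrows the overshoot spike but never shrinks its height -- yet at every fixed point away from the jump, the series converges as it should.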

Such a sum can have consistent partial sums, e.g.

0

-1 + 0 + 1

-2 + (-1) + 0 + 1 + 2

...

i.e., growing at both ends simultaneously, so the sequence of partial sums converges (here, every one is zero). But is this consistent? We're kind of starting in the middle of something new. A much stronger case would be made if we could express this in terms of something that's already provably convergent or divergent -- a single-ended series, from 0 to infinity.

If the double-ended series converges in the standard sense, then each one-sided piece we split off from it must converge as well.

Split it at zero: the piece from -infinity to 0, and the remainder, from 0 to infinity.

Does either piece converge? Aha, neither one does! Indeed, that the double-ended version converges is, at best, an arbitrary gimmick of how the terms are paired!
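A few lines make the asymmetry concrete (a sketch of my own):

```python
# Symmetric partial sums of ... + (-2) + (-1) + 0 + 1 + 2 + ... are all zero,
# but the one-sided half 0 + 1 + 2 + ... grows without bound.
def symmetric_partial(n):
    return sum(range(-n, n + 1))

def one_sided_partial(n):
    return sum(range(0, n + 1))

print([symmetric_partial(n) for n in range(5)])  # [0, 0, 0, 0, 0]
print([one_sided_partial(n) for n in range(5)])  # [0, 1, 3, 6, 10]
```

Every symmetric truncation is exactly zero, while either half by itself runs away -- which is precisely the "gimmick" at issue.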

An example from calculus: suppose, instead of a summation, an integration is required. What is the integral from -infty to +infty of x dx? Canonically, the infinite integral is a shorthand way of writing:

lim_(a -> +infty) of lim_(b -> -infty) of Int(from b to a) x dx

In other words, the limits are independent, and since the antiderivative is x^2/2, which blows up at both +/-infty, the result is the indeterminate form infinity minus infinity: the integral diverges.

But suppose we were tricky and performed it with just one limit simultaneously: Int(from -a to a) instead? Now the integral magically pops to zero! But we're admitting madness, because setting b = -a is quite arbitrary, no? Why not b = 1 - a, or something else equally arbitrary? After all, lim(a -> +infty) of (1 - a) = -infty, so we haven't apparently lost anything. But the result is different in that case: the integral evaluates to (a^2 - (1-a)^2)/2 = (2a - 1)/2, which diverges. How should we know what the correct result is, if there is one, and whether it is a unicorn or not?
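Since Int(from b to a) x dx = (a^2 - b^2)/2 in closed form, the cutoff dependence is easy to display (sketch mine):

```python
# Closed form of the definite integral of x from b to a
def integral_x(b, a):
    return (a**2 - b**2) / 2

# Symmetric cutoff b = -a: identically zero at every scale.
# Shifted cutoff b = 1 - a: grows like (2a - 1)/2 without bound.
for a in (10, 100, 1000):
    print(a, integral_x(-a, a), integral_x(1 - a, a))  # e.g. a=10 -> 0.0 and 9.5
```

The symmetric choice is exactly the Cauchy principal value; the shifted choice is just as "legitimate" a pair of limits, yet diverges.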

Once again, the physicist merely shrugs, and at worst, parameterizes the expression so he can solve backwards based on what the experiment says it should be. Sometimes, parameters like these appear in the theory, and have to be fixed as degrees of freedom -- "best fit" variables for the model. I don't know if any example exactly equivalent to this appears in Standard Model physics; the integration method (the symmetric-limit trick, i.e., the Cauchy principal value), likely, but with some justification for the choice of cutoff rather than leaving it variable.

Tim