Author Topic: -1/12 result  (Read 1829 times)


Offline Galenbo

  • Super Contributor
  • ***
  • Posts: 1474
  • Country: be
-1/12 result
« on: July 03, 2014, 08:57:22 am »
I saw a video about adding all positive numbers, giving the result of -1/12



The math and proof "look" nice (I didn't fully examine it), but there are other counterintuitive examples, for instance sums that don't diverge even though the individual numbers suggest they should.

They talk about this result being used in many areas of physics?
I never saw that in any of my physics classes.

Do you have an example of where this is used?
If you try and take a cat apart to see how it works, the first thing you have on your hands is a nonworking cat.
 

Offline DJohn

  • Regular Contributor
  • *
  • Posts: 103
  • Country: gb
Re: -1/12 result
« Reply #1 on: July 03, 2014, 11:03:14 am »
It's a bit of a dodgy result, really.  By the usual definition, the series 1+2+3+... does not converge.

To find the sum of an infinite series, we take the sequence of partial sums (in this case 1, 1+2, 1+2+3, ...) and see if it has a limit.  This one doesn't: it just keeps getting bigger and bigger.  Sensible people leave it there.

If you insist on finding a number that in some way could be regarded as the sum of the series, the video shows the way to do it.  We find a function that, when evaluated at a particular point, would give the sum of the series.  But because the series doesn't converge, the function is undefined at that point.  Now complex analysis comes to the rescue: one of the neater results is that if you have an analytic function (which just means it has a complex derivative) defined on some region of the plane, you can extend it to a larger region in only one way.  This is called analytic continuation.

The analytic continuation of this function looks nothing like the series we started with, but it does produce the same values of the function we started with (over the region that it's defined), and it is defined at the point we need.  Its value there is -1/12.  So if you really really insisted that this series have some kind of number associated with it, that's not a bad choice.
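For what it's worth, you can check the continuation numerically.  The sketch below (Python; `zeta_hasse` is just an illustrative name) uses Hasse's 1930 globally convergent series for the Riemann zeta function, which agrees with 1/1^s + 1/2^s + ... wherever that series converges, but is also defined at s = -1:

```python
from math import comb

def zeta_hasse(s, n_max=40):
    """Riemann zeta via Hasse's globally convergent series:

    zeta(s) = 1/(1 - 2**(1-s)) * sum_{n>=0} 2**-(n+1)
              * sum_{k=0..n} (-1)**k * C(n,k) * (k+1)**(-s)

    Valid for all s != 1, including s = -1, where the defining
    series 1 + 2 + 3 + ... diverges.
    """
    total = 0.0
    for n in range(n_max):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta_hasse(2))    # close to pi^2/6 = 1.6449...
print(zeta_hasse(-1))   # close to -1/12 = -0.0833...
```

At s = 2 it reproduces pi^2/6, matching the ordinary sum; at s = -1, where the naive series is 1 + 2 + 3 + ..., it returns -1/12.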

I think the association with physics comes from quantum electrodynamics.  The calculations there are sums over all of the different ways that an event could happen.  There are infinitely many of them, and when you add them all together you find that the sum diverges.  There's a trick that they use called "renormalization" to get around this.  Mathematicians protest that you can't do that, but the physicists respond that if they do, they get the right answers.  I suspect that this "renormalization" involves analytic continuation in just this way.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 15664
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: -1/12 result
« Reply #2 on: July 03, 2014, 11:30:32 am »
I haven't studied anything QED level or higher, but I understand that's where this sort of math appears.

In regular QM, the typical heavy-handed approach is to set up Schrodinger's equation according to the problem's boundary conditions and potentials, and assume a general power-series solution psi = c0 + c1*x + c2*x^2 + ..., where the c_n are potentially complex coefficients and x is the variable (often a vector) of interest (momentum or position or whatever).  Then, according to the terms in the differential equation, you solve for the relations between the c_n, and contemplate whether there's a familiar relation or not.

An example of a familiar relation would be:

c_0 = 0
c_1 = 1
c_n = -c_(n-2) / (n*(n-1))

Note this is a recurrence relation: each term is written in terms of an earlier one.

This is simply the Taylor series for the sine function, x - x^3 / 3! + x^5 / 5! - x^7 / 7! + ...
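As a sanity check, the recurrence above can be iterated numerically and compared against sin(x).  A sketch (`series_from_recurrence` is just an illustrative name):

```python
import math

def series_from_recurrence(x, n_terms=25):
    """Sum c_n * x**n where c_0 = 0, c_1 = 1, and
    c_n = -c_{n-2} / (n*(n-1)); this reproduces sin(x)."""
    c_prev2, c_prev1 = 0.0, 1.0          # c_0, c_1
    total = c_prev2 + c_prev1 * x
    for n in range(2, n_terms):
        c_n = -c_prev2 / (n * (n - 1))   # the recurrence
        total += c_n * x ** n
        c_prev2, c_prev1 = c_prev1, c_n
    return total

print(series_from_recurrence(1.0), math.sin(1.0))   # both close to 0.8414...
```

The even coefficients all vanish and the odd ones come out to +/-1/n!, i.e. exactly the sine Taylor series.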

Typical solutions of Schrodinger's equation include sines (propagation of matter waves), exponentials (probability decreasing with depth inside a barrier), "particle" type expressions (Gaussian pulses or wavelets propagating freely), orthogonal polynomial series around potential wells (Laguerre polynomials appear in the case for the hydrogen atom), and so on.  These are either finite or analytically closed examples, in that the expressions can be written fairly simply.  A good reason to cherry-pick them as examples. :)

In the case of the hydrogen atom, the recurrence relation contains degrees of freedom: integers with certain allowed ranges for which the differential equation is satisfied.  In other words, a family of equations, each one of which is a solution, as well as any combination (superposition) thereof.  The solutions themselves are not so important; the fact that there are parameters easily selecting them, however, is important.  These selection numbers are the eigenvalues of the system, and are exactly the n, l, m_l and m_s quantum numbers used in atomic spectroscopy and chemistry.
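How those selection numbers enumerate states can be made concrete.  A minimal sketch (illustrative helper name) counting the allowed (l, m_l, m_s) combinations for each principal quantum number n; the counts come out to 2n^2, the familiar shell capacities:

```python
def hydrogen_states(n):
    """Enumerate (l, m_l, m_s) combinations for principal quantum
    number n: l = 0..n-1, m_l = -l..l, m_s = +/- 1/2."""
    return [(l, m_l, m_s)
            for l in range(n)
            for m_l in range(-l, l + 1)
            for m_s in (-0.5, 0.5)]

for n in (1, 2, 3):
    print(n, len(hydrogen_states(n)))   # 2, 8, 18 states, i.e. 2*n**2
```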

But what happens if the result is not familiar, and can only be expressed as an infinite series, regardless of the function space used?  Meaning: whether you write it as a plain Taylor series, with infinite terms; or as a series in sines/cosines (the Fourier transform), or exponentials, or etc.; no matter what basis is used, an exact expression requires infinite terms -- it cannot be simplified, there is no closed-form analytical way to write it.

In the case of QED (quantum electrodynamics), it is my understanding that solutions arise which do not converge in the conventional (countable number theory) sense.

Suppose we determined that the equation had the following solution:

c_0 = 1
c_n = -c_(n-1)

Which is simply the Grandi series 1, -1, 1, -1, ...

You're screwed, right?  Physics must not exist, because this ludicrous series obviously does not converge.

But we physicists know better.  Or we know worse, as the case may be. :)  The real world obviously exists, so suppose we twist our tongue and say the series
S = 1 - 1 + 1 - 1 + ...
and subtract one,
-1 + S = -1 + (1 - 1 + 1 - 1 + ...)
Now here's the tricky part...
= -S?
And therefore S = 1/2?

Mathematicians will say, ah, you're rearranging the terms, now you're just going off and doing whatever you want, it could be zero, or one, or a unicorn, and you'd have something very peculiar!  (Ah, that takes me back.  Dr. Roy would use a phrase something like that.  Great teacher, but I digress.)  The physicist, on the other hand, merely shrugs it off, saying, hey, it's what we measure in the laboratory!
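Incidentally, the tongue-twisting above can be made respectable: Cesaro summation averages the partial sums instead of taking their limit, and for the Grandi series that average settles at 1/2.  A sketch (illustrative function name):

```python
def cesaro_mean(terms):
    """Average of the partial sums of `terms` (Cesaro summation)."""
    partial, acc = 0.0, 0.0
    for t in terms:
        partial += t          # running partial sum
        acc += partial        # accumulate partial sums for averaging
    return acc / len(terms)

# Grandi series 1 - 1 + 1 - 1 + ...: partial sums oscillate 1, 0, 1, 0, ...
grandi = [(-1) ** k for k in range(10000)]
print(cesaro_mean(grandi))   # 0.5
```

The partial sums never converge, but their average does, and it agrees with the algebraic manipulation above.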

There is also the case where, instead of n >= 0, negative n are required as well, so the sum spans all integers, -infty to +infty.

A simple example relating to circuits and signals: the Fourier series of a periodic real waveform has conjugate-symmetric positive and negative frequency components -- frequencies going forwards and backwards infinitely in time, summing to express an arbitrary signal over a finite time span.  (Of course, for any physically realizable signal the coefficients decay, which is to say, the terms converge.  The worst analytical case is a discontinuous signal like a square or sawtooth wave, for which the partial sums exhibit the Gibbs phenomenon.  The infinite series nonetheless converges pointwise, as it should.)
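The square-wave case is easy to play with numerically.  A sketch (illustrative function name) of the partial Fourier sums of a +/-1 square wave: away from the jump the partial sums converge, while near the jump they overshoot to roughly 1.18 no matter how many terms are kept (the Gibbs phenomenon):

```python
import math

def square_wave_partial(x, n_terms):
    """Partial Fourier sum of a +/-1 square wave of period 2*pi:
    (4/pi) * sum of sin(k*x)/k over the first n_terms odd k."""
    return (4 / math.pi) * sum(math.sin(k * x) / k
                               for k in range(1, 2 * n_terms, 2))

# Pointwise convergence away from the jump at x = 0:
print(square_wave_partial(math.pi / 2, 5000))   # close to 1.0

# Gibbs overshoot: scan near the jump; the peak sits around 1.18
peak = max(square_wave_partial(math.pi * i / 10000, 100)
           for i in range(1, 500))
print(peak)
```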

Such a sum can have consistent partial sums, e.g.
0
-1 + 0 + 1
-2 + (-1) + 0 + 1 + 2
...
i.e., growing at both ends simultaneously, so every partial sum is zero and, in this symmetric sense, the sum "converges".  But is this consistent?  We're kind of starting in the middle of something new.  A much stronger case would be made if we can express this in terms of something that's already provably convergent or divergent -- a single-ended series, from 0 to infinity.

If the above series converges, then we can take a sub-series from it, which will also be convergent.
Choose the sub-series from -infinity to 0, and the remainder, 0 to infinity.
Does either sub-series converge?  Aha, neither one does!  So the "convergence" in the double-ended case is, at best, an artifact of grouping the terms symmetrically!
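The contrast is easy to see numerically: the symmetric partial sums of the sum over all integers are identically zero, while either single-ended half grows without bound.  A sketch (illustrative names):

```python
def symmetric_partial(n):
    """Sum of k for k = -n .. n: always 0, by cancellation in pairs."""
    return sum(range(-n, n + 1))

def one_sided_partial(n):
    """Sum of k for k = 0 .. n: grows without bound, n*(n+1)/2."""
    return sum(range(n + 1))

print([symmetric_partial(n) for n in (10, 100, 1000)])   # [0, 0, 0]
print([one_sided_partial(n) for n in (10, 100, 1000)])   # [55, 5050, 500500]
```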

An example from calculus: suppose, instead of a summation, an integration is required.  What is the integral from -infty to +infty of x dx?  Canonically, the infinite integral is a shorthand way of writing:
lim_(a -> +infty) of lim_(b -> -infty) of Int(from b to a) x dx
In other words, the limits are taken independently, and since the antiderivative is x^2/2, which blows up at +/-infty, the integral diverges.

But suppose we were tricky and used a single shared limit instead: Int(from -a to a)?  Now the integral magically pops to zero!  But we're admitting madness, because setting b = -a is quite arbitrary, no?  Why not b = 1 - a, or something else equally arbitrary?  After all, lim(a -> +infty) of (1 - a) = -infty, so we haven't apparently lost anything.  But the value of the integral is different in that case.  How should we know what the correct result is, if there is one, and whether it is a unicorn or not?
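Since the integrand is just x, the integral can be evaluated exactly from the antiderivative, which makes the arbitrariness plain: symmetric limits give exactly zero for every a, while the shifted lower limit b = 1 - a gives (2a - 1)/2, which diverges.  A sketch (illustrative name):

```python
def integral_x(b, a):
    """Exact integral of x dx from b to a, via the antiderivative x**2 / 2."""
    return a * a / 2 - b * b / 2

a = 1e6
print(integral_x(-a, a))      # symmetric limits: exactly 0
print(integral_x(1 - a, a))   # shifted lower limit: (2a - 1)/2 = 999999.5
```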

Once again, the physicist merely shrugs, and at worst, parameterizes the expression so he can solve backwards based on what the experiment says it should be.  Sometimes, parameters like these appear in the theory, and have to be fixed as degrees of freedom -- "best fit" variables for the model.  I don't know if any example equivalent to this actually appears in Standard Model physics; the integration method (the symmetric-limit trick is essentially the Cauchy principal value), likely, but with some justification for the choice of substitution rather than leaving it variable.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

