How (Not) to Sum the Natural Numbers: Zeta Function Regularization

1+2+3+4+5+6+\ldots=-\frac{1}{12}

If you follow Numberphile on YouTube or Bad Astronomy on Slate, you’ve already seen this counter-intuitive sum written out. Similarly, if you follow those people or Scientopia’s Good Math, Bad Math, you’re aware that the way that sum was presented by Numberphile in that video was seriously flawed.

There is a real sense in which adding up all of the natural numbers (numbers 1, 2, 3…) really does give you minus one twelfth, despite all the reasons this should be impossible. However, there is also a real sense in which it does not, and cannot, do any such thing. To explain this, I’m going to introduce two concepts: complex analysis and regularization.

This discussion is not going to be mathematically rigorous, but it should give an authentic and accurate view of where these results come from. If you’re interested in the full mathematical details, a later discussion by Numberphile should help, and the mathematically confident should read Terence Tao’s treatment from back in 2010.

With that said, let’s talk about sums! Well, one sum in particular:

\frac{1}{1^s}+\frac{1}{2^s}+\frac{1}{3^s}+\frac{1}{4^s}+\frac{1}{5^s}+\frac{1}{6^s}+\ldots = \zeta(s)

If s is greater than one, then each term in this infinite sum gets smaller and smaller fast enough that you can add them all up and get a number. That number is referred to as \zeta(s), the Riemann Zeta Function.
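
For instance, with s = 2 the partial sums settle down to \zeta(2)=\frac{\pi^2}{6}\approx 1.6449. Here’s a quick numerical sketch in Python, just to watch the convergence happen:

# Partial sums of 1/1^s + 1/2^s + ... for s = 2.
# These should creep up toward pi^2/6.
import math

s = 2
partial = 0.0
for n in range(1, 100001):
    partial += 1.0 / n**s

print(partial)           # about 1.64492
print(math.pi**2 / 6)    # about 1.64493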

So what if s is smaller than one?

The infinite sum that I described doesn’t converge for s less than one. Add it up in any reasonable way, and it just approaches infinity. Put another way, the sum is not properly defined. But despite this, \zeta(s) is not infinite for s less than one!
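
Here is the same kind of sketch for s = 1/2. Instead of settling down, the partial sums just keep climbing:

# Partial sums of 1/1^s + 1/2^s + ... for s = 0.5.
# No convergence here: the running total grows roughly like 2*sqrt(n).
s = 0.5
partial = 0.0
for n in range(1, 1000001):
    partial += 1.0 / n**s
    if n in (10, 1000, 100000, 1000000):
        print(n, partial)   # about 5.0, 61.8, 631.0, 1998.5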

Now as you might object, we only defined the Riemann Zeta Function for s greater than one. How do we know anything at all about it for s less than one?

That is where complex analysis comes in. Complex analysis sounds like a made-up term for something unreasonably complicated, but it’s quite a bit more approachable when you know what it means. Analysis is the type of mathematics that deals with functions, infinite series, and the basis of calculus. It’s often contrasted with Algebra, which usually considers mathematical concepts that are discrete rather than smooth (this definition is a huge simplification, but it’s not very relevant to this post). Complex means that complex analysis deals with functions, not of everyday real numbers, but of complex numbers, or numbers with an imaginary part.

So what does complex analysis say about the Riemann Zeta Function?

One of the most impressive results of complex analysis is the discovery that if a function of a complex number is sufficiently smooth (the technical term is analytic) then it is very highly constrained. In particular, if you know how the function behaves over an area (technical term: open set), then you know how it behaves everywhere else!
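
As a small illustration of how rigid analytic functions are, consider the familiar exponential function. Once you ask for an extension of e^x to complex numbers that is analytic, there is no freedom left: the only possibility is

e^{x+iy}=e^{x}\left(\cos y + i \sin y\right)

In other words, Euler’s formula isn’t a convention we get to choose; knowing the function for real numbers already fixes its value at every complex number. The Riemann Zeta Function is rigid in exactly the same way.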

If you’re expecting me to explain why this is true, you’ll be disappointed. This is serious mathematics, and serious mathematics isn’t the sort of thing you can give the derivation for in a few lines. It takes as much effort and knowledge to replicate a mathematical result as it does to replicate many lab results in science.

What I can tell you is that this sort of approach crops up in many places, and is part of a general theme. There is a lot you can tell about a mathematical function just by looking at its behavior in some limited area, because mathematics is often much more constrained than it appears. It’s the same sort of principle behind the work I’ve been doing recently.

In the case of the Riemann Zeta Function, we have a definition for s greater than one. As it turns out, this definition still works if s is a complex number, as long as the real part of s is greater than one. Since we know the value of the Riemann Zeta Function over that large area (half of the complex plane), complex analysis tells us its value at every other point, with one exception: s = 1, where the function blows up. In particular, it tells us this:

\zeta(-1)= -\frac{1}{12}

If the Riemann Zeta Function is consistently defined for every complex number, then it must have this value when s is minus one.
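
If you’d like to see the number come out of a formula, one route is the so-called functional equation, a complex-analysis result (which I won’t derive here) relating the zeta function at s to its value at 1-s:

\zeta(s)=2^s\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\zeta(1-s)

Plugging in s=-1, and using \Gamma(2)=1 and \zeta(2)=\frac{\pi^2}{6}, the right-hand side becomes \frac{1}{2}\cdot\frac{1}{\pi^2}\cdot(-1)\cdot 1\cdot\frac{\pi^2}{6}=-\frac{1}{12}.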

If we still trusted the sum definition for this value of s, we could plug in -1 and get

 1+2+3+4+5+6+\ldots=-\frac{1}{12}

Does that make this statement true? Sort of. It all boils down to a concept from physics called regularization.

In physics, we know that in general there is no such thing as infinity. With a few exceptions, nothing in nature should be infinite, and finite evidence (without mathematical trickery) should never lead us to an infinite conclusion.

Despite this, occasionally calculations in physics will give infinite results. Almost always, this is evidence that we are doing something wrong: we are not thinking hard enough about what’s really going on, or there is something we don’t know or aren’t taking into account.

Doing physics research isn’t like taking a physics class: sometimes, nobody knows how to do the problem correctly! In many cases where we find infinities, we don’t know enough about “what’s really going on” to correct them. That’s where regularization comes in handy.

Regularization is the process by which an infinite result is replaced with a finite one (made “regular”) in a way that keeps the same properties. These finite results can then be used to do calculations and make predictions, and as long as the final predictions are regularization independent (that is, the same as if you had used a different regularization trick instead), they are legitimate.
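
For a concrete taste of how this works, in the spirit of the smoothed sums in Tao’s post: damp each term of 1+2+3+\ldots by a factor e^{-n\epsilon} for some small positive \epsilon, so that the sum converges, and then see what happens as the damping is removed. Expanding the closed form of the damped sum gives

\sum_{n=1}^{\infty}n\,e^{-n\epsilon}=\frac{e^{-\epsilon}}{\left(1-e^{-\epsilon}\right)^2}=\frac{1}{\epsilon^2}-\frac{1}{12}+O(\epsilon^2)

The \frac{1}{\epsilon^2} piece blows up as \epsilon goes to zero; that’s the infinity a more careful physical treatment has to cancel or absorb. The finite piece left behind is -\frac{1}{12}, the same number the zeta function hands us.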

In string theory, one way to compute the required dimensions of space and time ends up giving you an infinite sum, a sum that goes 1+2+3+4+5+…. In context, this result is obviously wrong, so we regularize it. In particular, we say that what we’re really calculating is the Riemann Zeta Function, which we happen to be evaluating at -1. Then we replace 1+2+3+4+5+… with -1/12.
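
Very roughly, the sum appears as the ground-state (zero-point) energy of the string’s oscillation modes: each of the D-2 directions transverse to the string contributes \frac{1}{2}\left(1+2+3+\ldots\right), so regularizing with the zeta function gives

\frac{D-2}{2}\sum_{n=1}^{\infty}n\;\rightarrow\;\frac{D-2}{2}\,\zeta(-1)=-\frac{D-2}{24}

Demanding that the resulting spectrum make sense (in particular, that the string’s first excited state be massless, as Lorentz invariance requires) forces \frac{D-2}{24}=1, which is where the famous 26 dimensions of bosonic string theory come from.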

Now remember when I said that getting infinities is a sign that you’re doing something wrong? These days, we have a more rigorous way to do this same calculation in string theory, one that never forces us to take an infinite sum. As expected, it gives the same result as the old method, showing that the old calculation was indeed regularization independent.

Sometimes we don’t have a better way of doing the calculation, and that’s when regularization techniques come in most handy. A particular family of tricks called renormalization is quite important, and I’ll almost certainly discuss it in a future post.

So can you really add up all the natural numbers and get -1/12? No. But if a calculation tells you to add up all the natural numbers, and it’s obvious that the result can’t be infinite, then it may secretly be asking you to calculate the Riemann Zeta Function at -1. And that, as we know from complex analysis, is indeed -1/12.
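
If you’d like to check the continued value numerically, a library like mpmath (assuming you have it installed) already knows about the analytic continuation:

# Evaluate the analytically continued Riemann Zeta Function with mpmath.
from mpmath import mp, zeta

mp.dps = 20        # work with 20 digits of precision
print(zeta(2))     # 1.6449340668..., which is pi^2/6
print(zeta(-1))    # -0.0833333333..., which is -1/12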

16 thoughts on “How (Not) to Sum the Natural Numbers: Zeta Function Regularization”

  1. Googol

    It comes down to semantics, to a certain extent. A layman may similarly complain that x = i is “not actually” a solution to the equation x^2 = -1 (similar thing for the ancients with the equation x^2 = 2). People may complain that x = 1 + 2 + 3 + 4 + … is not a “true” sum. I disagree. That an infinite divergent sum of positive values can have a finite value, let alone a negative one, is counter-intuitive, yes. But it’s what nature seems to be whispering to us. It’s useful. It gives correct predictions. It may be counter-intuitive, for example, that any finite sum of rational numbers is rational and an infinite sum of rational numbers can be irrational (e.g. equal to pi). But it’s true.

    I think this post (http://motls.blogspot.ca/2007/09/zeta-function-regularization.html) put it in a nice way:

    “Theoretical physics – because it is a natural science – has a different set of wisdoms what to do with seemingly meaningless expressions than conventional mathematics has. Sums and integrals in physics mean something else than a prescription for a mechanical algorithm. Instead, they encode “natural” sums and integrals that are supposed to be evaluated by Nature. And She always likes to return a meaningful finite answer. From Her viewpoint, the people who will rant about divergences, infinities, and not even wrong things are just looking at the sum too naively, without using some necessary powerful tools.

    When a physicist writes an integral, she usually doesn’t care whether you use the Lebesgue integral or the Riemann integral. For a physicist, these two and other definitions of an integral are just man-made caricatures to calculate some expressions in practice and to give them a rigorous meaning in a particular system of conventions.

    That’s not exactly what a physicist means by the integral. A physicist always means nothing else than Nature’s integral that coincides with the Riemann and Lebesgue integral in most well-behaved situations. But whenever there is something unusual about the integral, we must leave it up to Nature – not Riemann or Lebesgue – to decide what is the right thing to do with the integral. And we must learn the answer from Her, rather than Riemann or Lebesgue. And indeed, Her answer is often different and brings some additional flavor and rules to calculate. This fact about theoretical physics is virtually impenetrable for most laymen and even for most mathematicians.”

    1. 4gravitonsandagradstudent Post author

      It’s semantics to a certain extent. It’s important to note, though, that there are plenty of physical problems where 1+2+3+4… is indeed infinite. In an infinite universe, if you encounter an infinite series of planets with one apple, then two, then three, etc…then there really are infinite apples.

      1+2+3+4… is -1/12 only when the problem is one that we know should be giving a finite answer, or in more Motlian language, one where nature tells us to seek a finite answer. In my mind, that makes it more appropriate to think about it as a patch for ignorance rather than a truth about the objects involved.

      1. Googol

        Just came across this post again!

        That is an interesting point. In some ways, this reminds me of the idea that flipping two fair, bosonic coins will produce two heads one-third of the time, instead of one-fourth of the time, in Bose-Einstein statistics (which would appear obviously wrong to anyone with a basic understanding of statistics). The correct answer to a problem like this indeed seems to depend on the context or domain of application. For example, it would appear that 1 + 1 = 0 is obviously incorrect, but it is an accurate statement for the XOR operation in Boolean algebra, in terms of modular arithmetic. Similarly, there are contexts where exp(-infinity) = 0 and the use of the extended real line make sense, while there are others in geometry where 1/0 = infinity and the use of the projective real line make sense. Some of these regularization methods have been shown to yield accurate, finite answers for combinatorial problems and QFT, so they seem to be correct (or useful, if you will) in at least those domains.

  2. Zim the Fox

    The way I understand this, from my layman’s point of view, is that, really, it all boils down to what you define a sum to be. If we define the series as the limit of the sequence of partial sums, then you will get an infinite result (this would be the “natural” answer). But you could conceive of many other definitions for the sum, like Ramanujan summation, which also gives -1/12 for this particular series.

    To illustrate my case, consider 1 - 1 + 1 - 1 + … If we define the sum as the limit of the sequence of partial sums we can clearly see that the limit is undefined. But under many other definitions of summation (Cesàro summation, Ramanujan summation…), this series would add up to 1/2.
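
    To make the Cesàro version concrete: Cesàro summation just averages the running partial sums, and for 1 - 1 + 1 - 1 + … those averages drift toward 1/2. A few lines of Python illustrate the definition:

    # Cesaro summation of Grandi's series 1 - 1 + 1 - 1 + ...:
    # average the running partial sums; the averages approach 1/2.
    running_total = 0
    partial_sums = []
    for n in range(1, 10000):
        running_total += 1 if n % 2 == 1 else -1   # terms are +1, -1, +1, -1, ...
        partial_sums.append(running_total)
        if n in (9, 99, 999, 9999):
            print(n, sum(partial_sums) / n)        # 0.556..., 0.505..., 0.5005..., 0.50005...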

    Again, this is from a layman’s point of view. I think I made a proper job of explaining myself, but I could be wrong. I am exhausted and I should reaaaally go to sleep xD But I just remembered your blog existed and I have been reading it (and avoiding homework).

    1. 4gravitonsandagradstudent Post author

      That’s a good way to think about it. The one caveat is that different ways that you define the sum change the sort of math you can do with it. So you can define 1 - 1 + 1 - … to be 1/2, but then you can’t add and subtract it like they do in the Numberphile video. Different definitions are appropriate in different places.

  3. Just trying to understand our universe

    Thank you for this explanation! As a mathematical relative layperson myself (stopped after intro college Calculus), I’ve been trying to understand the online back-and-forth about what the 1+2+3…=-1/12 really means, and this is the first explanation that I really could understand. Very clear and well done.

  4. Gil

    You said “With a few exceptions, nothing in nature should be infinite”.
    That’s something I’ve been thinking about for a while.
    What are these exceptions? Total energy and size of the universe, perhaps? Anything else?

    1. 4gravitonsandagradstudent Post author

      Those definitely, provided (likely) that the universe is infinite. Beyond that, you’ve got somewhat trivial stuff (1/number of magical unicorns, that sort of thing), but more broadly anything that’s straight-up impossible can be described as an infinity in one way or another. I don’t think there are any other good examples, though.

  5. Collin Merenoff

    Would it make any sense to say that
    sum_n=1^inf f(n) = int_(omega-1)^omega sum_n=1^x f(x) dx
    where omega is considered compatible with arithmetic, as in Conway’s Surreal Numbers.

    I’ve found that this agrees with zeta regularization, although I don’t know why. For example,

    1+2+3+… = omega^2/2 – 1/12

  6. Collin Merenoff

    Oops! That should have been
    sum_n=1^inf f(n) = int_(omega-1)^omega sum_n=1^x f(n) dx

    In this case,
    int_(omega-1)^omega sum_n=1^x n dx
    = int_(omega-1)^omega (x^2/2+x/2) dx
    = omega^3/6 + omega^2/4 – (omega-1)^3/6 – (omega-1)^2/4
    = omega^3/6 + omega^2/4 – omega^3/6 + omega^2/2 – omega/2 + 1/6 – omega^2/4 + omega/2 – 1/4
    = (1/6-1/6) omega^3 + (1/4+1/2-1/4) omega^2 + (-1/2+1/2) omega + (1/6-1/4)
    = omega^2/2 – 1/12

    1. 4gravitonsandagradstudent Post author

      First, it seems like this doesn’t use any special features of omega: if you replace omega with zero from the beginning, you get the same result. Second, while it does seem to work for the zeta function, if you plug in even a convergent sum like sum_n=1^inf (1/2)^n, it doesn’t work. So I’m guessing you’ve managed to take advantage of some interesting property of the zeta function, and I’m kind of curious what property it is, but this doesn’t work for general ill-defined sums.

  7. Dr Duncan C Watson

    Why go to anything as complicated as the Zeta function to illustrate regularization of a divergent sum? Let’s use Penrose’s example from his Road to Reality. For -1 < x < 1 we have 1/(1-x) = 1 + x + x^2 + x^3 + x^4 + …. so that for instance 1 + 1/3 + 1/9 + 1/27 + 1/81 + …. = 3/2. For abs(x) > 1 the expansion diverges but the closed-form expression is well-behaved so that in some tongue-in-cheek fashion one could write 1 + 2 + 4 + 8 + 16 + …. = 1/(1-2) = -1 or 1 – 2 + 4 – 8 + 16 – …. = 1/(1+2) = 1/3. The form 1/(1-x) , valid everywhere except at the pole x=1, is the analytic continuation of the expansion 1 + x + x^2 + x^3 + x^4 + …. which is valid for abs(x) < 1. In particular one could write 1 – 1 + 1 – 1 + 1 – …. = 1/2.

    1. 4gravitonsandagradstudent Post author

      While the zeta function tends to attract the most online confusion, in part because of its role in string theory, you’re perfectly right that there are much simpler examples of analytic continuation that people should (hopefully) have an easier time grasping intuitively.

    2. John Martin

      A function related to the Riemann zeta function is the normalised ordinary Dirichlet series lim (N -> infinity) (sum_n=1^N (1/n^Re(s)))/N^(1-Re(s)) = 1/(1-Re(s)) for Re(s) < 1, which has the 1/(1-x) behaviour described above. Hence an alternate regularization would be 1/(1-Re(s)) for Re(s) < 1.
      1+1+1+1+ … = 1, 1+2+3+4+… = 0.5
      https://figshare.com/articles/A_normalisation_of_the_ordinary_Dirichlet_Series_in_the_lower_half_complex_plane_that_has_the_equivalent_normalised_Riemann_Zeta_function_as_an_detrended_envelope_function_/4762339

  8. pollux

    In the Casimir effect,
    you take two smooth plates very close together, separated by vacuum.
    The quantum fluctuations generate an energy proportional to the sum of the frequencies of the waves that can exist between the two plates, the ones that vanish at the boundaries.
    Where L is the separation between the two plates…
    the equation looks like this: 1/L + 2/L + 3/L + 4/L… out to infinity…
    Or (1/L) (1+2+3+4+5…)
    If the result were positive the two plates would repel, but if it’s negative they are attracted…
    Guess what: they are attracted!
    And guess which value you find when you measure it in the real world? -1/12!!!
    Cheers.
