
Why a New Particle Matters

A while back, when the MiniBooNE experiment announced evidence for a sterile neutrino, I was excited. It’s still not clear whether they really found something; here’s an article laying out the current status. If they did, it would be a new particle beyond those predicted by the Standard Model, something like the neutrinos but which doesn’t interact with any of the fundamental forces except gravity.

At the time, someone asked me why this was so exciting. Does it solve the mystery of dark matter, or any other long-standing problems?

The sterile neutrino MiniBooNE is suggesting isn’t, as far as I’m aware, a plausible candidate for dark matter. It doesn’t solve any long-standing problems (for example, it doesn’t explain why the other neutrinos are so much lighter than other particles). It would even introduce new problems of its own!

It still matters, though. One reason, which I’ve talked about before, is that each new type of particle implies a new law of nature, a basic truth about the universe that we didn’t know before. But there’s another reason why a new particle matters.

There’s a malaise in particle physics. For most of the twentieth century, theory and experiment were tightly linked. Unexpected experimental results would demand new theory, which would in turn suggest new experiments, driving knowledge forward. That mostly stopped with the Standard Model. There are a few lingering anomalies, like the phenomena we attribute to dark matter, that show the Standard Model can’t be the full story. But as long as every other experiment fits the Standard Model, we have no useful hints about where to go next. We’re just speculating, and too much of that warps the field.

Critics of the physics mainstream pick up on this, but I’m not optimistic about what I’ve seen of their solutions. Peter Woit has suggested that physics should emulate the culture of mathematics, caring more about rigor and being more careful to confirm things before speaking. The title of Sabine Hossenfelder’s “Lost in Math” might suggest the opposite, but I get the impression she’s arguing for something similar: that particle physicists have been using sloppy arguments and should clean up their act, taking foundational problems seriously and talking to philosophers to help clarify their ideas.

Rigor and clarity are worthwhile, but the problems they’ll solve aren’t the ones causing the malaise. If there are problems we can expect to solve just by thinking better, they’re problems that we found by thinking in the first place: quantum gravity theories that stop making sense at very high energies, paradoxical thought experiments with black holes. There, rigor and clarity can matter: to some extent they’re already present, but I can appreciate the argument that they’re not yet nearly enough.

What rigor and clarity won’t do is make physics feel (and function) like it did in the twentieth century. For that, we need new evidence: experiments that disobey the Standard Model, and do it in a clear enough way that we can’t just chalk it up to predictable errors. We need a new particle, or something like it. Without that, our theories are most likely underdetermined by the data, and anything we propose is going to be subjective. Our subjective judgements may get better, we may get rid of the worst-justified biases, but at the end of the day we still won’t have enough information to actually make durable progress.

That’s not a popular message, in part, because it’s not something we can control. There’s a degree of helplessness in realizing that if nature doesn’t throw us a bone then we’ll probably just keep going in circles forever. It’s not the kind of thing that lends itself to a pithy blog post.

If there’s something we can do, it’s to keep our eyes as open as possible, to make sure we don’t miss nature’s next hint. It’s why people are getting excited about low-energy experiments, about precision calculations, about LIGO. Even this seemingly clickbaity proposal that dark matter killed the dinosaurs is motivated by the same sort of logic: if the only evidence for dark matter we have is gravitational, what can gravitational evidence tell us about what it’s made of? In each case, we’re trying to widen our net, to see new phenomena we might have missed.

I suspect that’s why this reviewer was disappointed that Hossenfelder’s book lacked a vision for the future. It’s not that the book lacked any proposals whatsoever. But it lacked this kind of proposal, of a new place to look, where new evidence, and maybe a new particle, might be found. Without that we can still improve things, we can still make progress on deep fundamental mathematical questions, we can kill off the stupidest of the stupid arguments. But the malaise won’t lift, we won’t get back to the health of twentieth century physics. For that, we need to see something new.


Quelques Houches

For the last two weeks I’ve been at Les Houches, a village in the French Alps, for the Summer School on Structures in Local Quantum Field Theory.

[Image: To assist, we have a view of some very large structures in local quantum field theory]

Les Houches has a long history of prestigious summer schools in theoretical physics, going back to the activity of Cécile DeWitt-Morette after the Second World War. This was more of a workshop than a “school”, though: each speaker gave one talk, and the talks weren’t really geared for students.

The workshop was organized by Dirk Kreimer and Spencer Bloch, who both have a long track record of work on scattering amplitudes with a high level of mathematical sophistication. The group they invited was an even mix of physicists interested in mathematics and mathematicians interested in physics. The result was a series of talks that managed to both be thoroughly technical and ask extremely deep questions, including “is quantum electrodynamics really an asymptotic series?”, “are there simple graph invariants that uniquely identify Feynman integrals?”, and several talks about something called the Spine of Outer Space, which still sounds a bit like a bad sci-fi novel. Along the way there were several talks showcasing the growing understanding of elliptic polylogarithms, giving me an opportunity to quiz Johannes Broedel about his recent work.

While some of the more mathematical talks went over my head, they spurred a lot of productive dialogues between physicists and mathematicians. Several talks had last-minute slides, added as a result of collaborations that happened right there at the workshop. There was even an entire extra talk, by David Broadhurst, based on work he did just a few days before.

We also had a talk by Jaclyn Bell, a former student of one of the participants, who appeared on a BBC reality show about training to be an astronaut. She’s heavily involved in outreach now, and honestly I’m a little envious of how good she is at it.

Be Rational, Integrate Our Way!

I’ve got another paper up this week with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, about integrating Feynman diagrams.

If you’ve been following this blog for a while, you might be surprised: most of my work avoids Feynman diagrams at all costs. I’ve changed my mind, in part, because it turns out integrating Feynman diagrams can be a lot easier than I had thought.

At first, I thought Feynman integrals would be hard purely because they’re integrals. Those of you who’ve taken calculus might remember that, while taking derivatives was just a matter of following the rules, doing integrals required a lot more thought. Rather than one set of instructions, you had a set of tricks, meant to try to match your integral to the derivative of some known function. Sometimes the tricks worked, sometimes you just ended up completely lost.

As it turns out, that’s not quite the problem here. When I integrate a Feynman diagram, most of the time I’m expecting a particular kind of result, called a polylogarithm. If you know that’s the end goal, then you really can just follow the rules, using partial-fractioning to break your integral up into simpler integrations, linear pieces that you can match to the definition of polylogarithms. There are even programs that do this for you: Erik Panzer’s HyperInt is an especially convenient one.

[Image: Or it would be convenient, if Maple’s GUI wasn’t cursed…]
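To give a flavor of that rule-following workflow, here’s a toy sketch of my own in sympy (not HyperInt itself, and far simpler than a real Feynman integrand): partial-fraction the integrand into pieces linear in the variable, integrate each piece into a logarithm, and iterate to climb up to polylogarithms.

```python
# A toy version of the "just follow the rules" workflow: partial-fraction a
# rational integrand into pieces linear in x, then integrate them into logs.
# (My own sketch in sympy, not how HyperInt actually works internally.)
import sympy as sp

x = sp.symbols("x")
pieces = sp.apart(1 / (x * (1 - x)), x)     # -> 1/x - 1/(x - 1)
print(sp.integrate(pieces, x))              # -> log(x) - log(x - 1)

# One more integration of a log-over-linear piece produces a genuine
# polylogarithm, the kind of function we expect from a Feynman diagram:
print(sp.integrate(-sp.log(1 - x) / x, x))  # -> polylog(2, x)
```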

Still, I wouldn’t have expected Feynman integrals to work particularly well, because they require too many integrations. You need to integrate a certain number of times to define a polylogarithm: for the ones we get out of Feynman diagrams, it’s two integrations for each loop the diagram has. The usual ways we calculate Feynman diagrams lead to a lot more integrations: the systematic method, using something called Symanzik polynomials, involves one integration per particle line in the diagram, which usually adds up to a lot more than two per loop.
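To make that counting concrete, here’s a minimal sketch (my own illustration, not code from any of the tools involved) of the first Symanzik polynomial, the object behind the “one integration per particle line” count: one Feynman parameter per internal line, summed over the diagram’s spanning trees.

```python
# First Symanzik polynomial U of a Feynman graph: the sum over spanning
# trees of the product of the Feynman parameters of the lines NOT in the
# tree. There is one parameter alpha_i per internal line, which is why the
# standard parametric representation needs so many integrations.
from itertools import combinations
import sympy as sp

def spans(vertices, tree_edges):
    """Do these edges connect every vertex? (a simple union-find check)"""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in tree_edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices}) == 1

def symanzik_U(vertices, edges, alphas):
    U = sp.Integer(0)
    for tree in combinations(range(len(edges)), len(vertices) - 1):
        if spans(vertices, [edges[i] for i in tree]):
            U += sp.Mul(*[alphas[i] for i in range(len(edges)) if i not in tree])
    return U

# one-loop "bubble" diagram: two vertices joined by two lines
a1, a2 = sp.symbols("alpha1 alpha2", positive=True)
print(symanzik_U([1, 2], [(1, 2), (1, 2)], [a1, a2]))  # alpha1 + alpha2
```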

When I arrived at the Niels Bohr Institute, I assumed everyone in my field knew about Symanzik polynomials. I was surprised when it turned out Jake Bourjaily hadn’t even heard of them. He was integrating Feynman diagrams by what seemed like a plodding, unsystematic method, taking the intro example from textbooks and just applying it over and over, gaining no benefit from all of the beautiful graph theory that goes into the Symanzik polynomials.

I was even more surprised when his method turned out to be the better one.

Avoid Symanzik polynomials, and you can manage with a lot fewer integrations. Suddenly we were pretty close to the “two integrations per loop” sweet spot, with only one or two “extra” integrations to do.

A few more advantages like this, and Feynman integrals were actually looking reasonable. The final insight came when we realized that just writing the problem in the right variables made a huge difference.

HyperInt, as I mentioned, tries to break a problem up into simpler integrals. Specifically, it’s trying to make things linear in the integration variable. In order to do this, sometimes it has to factor quadratic polynomials, like so:

x^2+bx+c = \left(x+\frac{b}{2}-\sqrt{\frac{b^2}{4}-c}\right)\left(x+\frac{b}{2}+\sqrt{\frac{b^2}{4}-c}\right)

Notice the square roots in this formula? Those can make your life a good deal trickier. Once you’ve got irrational functions in the game, HyperInt needs extra instructions for how to handle them, and integration is a lot more cumbersome.

The last insight, then, and the key point in our paper, is to avoid irrational functions. To do that, we use variables that rationalize the square roots.
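Here’s the idea on a textbook square root (my own toy example, nothing to do with the actual charts in the paper): pick a change of variables under which the root collapses into a rational function.

```python
# Rationalizing a square root by a change of variables: under
# x = 2t/(1 + t^2), the root sqrt(1 - x^2) becomes (1 - t^2)/(1 + t^2) up to
# a sign, so anything built from x and sqrt(1 - x^2) turns rational in t.
import sympy as sp

t = sp.symbols("t", positive=True)
x = 2 * t / (1 + t**2)
print(sp.simplify(sp.sqrt(1 - x**2)))  # -> Abs(t**2 - 1)/(t**2 + 1): no root left
```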

We get these variables from one of the mainstays of our field, called momentum twistors. These variables are most useful in our favorite theory of N=4 super Yang-Mills, but they’re useful in other contexts too. By parametrizing them with a good “chart”, one with only the minimum number of variables we need to capture the integral, we can rationalize most of the square roots we encounter.

That “most” is going to surprise some people. We rationalized all of the expected square roots, letting us do integrals all the way to four loops in a few cases. But there were some unexpected square roots, and those we couldn’t rationalize.

These unexpected square roots don’t just make our life more complicated: if they stick around in a physically meaningful calculation, they’ll upset a few other conjectures as well. People had expected that these integrals were made of certain kinds of “letters”, organized by a mathematical structure called a cluster algebra. That cluster algebra structure doesn’t have room for square roots, which suggests that it can’t be the full story here.

The integrals that we can do, though, with no surprise square roots? They’re much easier than anyone expected, much easier than with any other method. Rather than running around doing something fancy, we just integrated things the simple, rational way…and it worked!

Calabi-Yaus for Higgs Phenomenology

(less joking title: You Didn’t Think We’d Stop at Elliptics, Did You?)

When calculating scattering amplitudes, I like to work with polylogarithms. They’re a very well-understood type of mathematical function, and thus pretty easy to work with.

Even for our favorite theory of N=4 super Yang-Mills, though, they’re not the whole story. You need other types of functions to represent amplitudes: elliptic polylogarithms, which are only just beginning to be properly understood. We had our own modest contribution to that topic last year.

You can think of the difference between these functions in terms of more and more complicated curves. Polylogarithms just need circles or spheres; elliptic polylogarithms can be described with a torus.

A torus is far from the most complicated curve you can think of, though.

String theorists have done a lot of research into complicated curves, in particular ones with a property called Calabi-Yau. They were looking for ways to curl up six or seven extra dimensions, to get down to the four we experience. They wanted to find ways of curling that preserved some supersymmetry, in the hope that they could use it to predict new particles, and it turned out that Calabi-Yau was the condition they needed.

That hope, for the most part, didn’t pan out. There were too many Calabi-Yaus to check, and the LHC hasn’t seen any supersymmetric particles. Today, “string phenomenologists”, who try to use string theory to predict new particles, are a relatively small branch of the field.

This research did, however, have lasting impact: due to string theorists’ interest, there are huge databases of Calabi-Yau curves, and fruitful dialogues with mathematicians about classifying them.

This has proven quite convenient for us, as we happen to have some Calabi-Yaus to classify.

[Image: Our midnight train going anywhere…in the space of Calabi-Yaus]

We call Feynman diagrams like the one above “traintrack integrals”. With two loops, it’s the elliptic integral we calculated last year. With three, though, you need a type of Calabi-Yau surface called a K3. With four loops, it looks like you start needing Calabi-Yau three-folds, the type of space used to compactify string theory to four dimensions.

“We” in this case is myself, Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, and Yang-Hui He, a Calabi-Yau expert we brought on to help us classify these things. Our new paper investigates these integrals, and the more and more complicated curves needed to compute them.

Calabi-Yaus had been seen in amplitudes before, in diagrams called “sunrise” or “banana” integrals. Our example shows that they should occur much more broadly. “Traintrack” integrals appear in our favorite N=4 super Yang-Mills theory, but they also appear in theories involving just scalar fields, like the Higgs boson. For enough loops and particles, we’re going to need more and more complicated functions, not just the polylogarithms and elliptic polylogarithms that people understand.

(And to be clear, no, nobody needs to do this calculation for Higgs bosons in practice. This diagram would calculate the result of two Higgs bosons colliding and producing ten or more Higgs bosons, all at energies so high you can ignore their mass, which is…not exactly relevant for current collider phenomenology. Still, the title proved too tempting to resist.)

Is there a way to understand traintrack integrals like we understand polylogarithms? What kinds of Calabi-Yaus do they pick out, in the vast space of these curves? We’d love to find out. For the moment, we just wanted to remind all the people excited about elliptic polylogarithms that there’s quite a bit more strangeness to find, even if we don’t leave the tracks.

Path Integrals and Loop Integrals: Different Things!

When talking science, we need to be careful with our words. It’s easy for people to see a familiar word and assume something totally different from what we intend. And if we use the same word twice, for two different things…

I’ve noticed this problem with the word “integral”. When physicists talk about particle physics, there are two kinds of integrals we mention: path integrals, and loop integrals. I’ve seen plenty of people get confused, and assume that these two are the same thing. They’re not, and it’s worth spending some time explaining the difference.

Let’s start with path integrals (also referred to as functional integrals, or Feynman integrals). Feynman promoted a picture of quantum mechanics in which a particle travels along many different paths, from point A to point B.

[Image: three paths from point A to point B]

You’ve probably seen a picture like this. Classically, a particle would just take one path, the shortest path, from A to B. In quantum mechanics, you have to add up all possible paths. Most longer paths cancel, so on average the short, classical path is the most important one, but the others do contribute, and have observable, quantum effects. The sum over all paths is what we call a path integral.
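If you’d like to see that cancellation in action, here’s a toy numerical sketch of my own, for a discretized free particle in one dimension. It isn’t a real path-integral computation, just an illustration: the farther the paths stray from the classical straight line, the more their phases wash out.

```python
# Toy illustration of "most longer paths cancel": random discretized paths
# from x=0 to x=1 each contribute a phase exp(i*S), with the free-particle
# action S built from the squared step sizes. Averaging the phases shows
# coherence near the classical path and cancellation far from it.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_paths = 20, 100_000
dt = 1.0 / n_steps
classical = np.linspace(0.0, 1.0, n_steps + 1)  # the straight-line path

for wiggle in (0.05, 0.5, 2.0):  # how far the paths stray from classical
    paths = np.tile(classical, (n_paths, 1))
    paths[:, 1:-1] += wiggle * rng.normal(size=(n_paths, n_steps - 1))
    actions = 0.5 * np.sum(np.diff(paths, axis=1) ** 2, axis=1) / dt
    phase = np.mean(np.exp(1j * actions))
    print(f"wiggle={wiggle}: |average phase| = {abs(phase):.3f}")
```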

It’s easy enough to draw this picture for a single particle. When we do particle physics, though, we aren’t usually interested in just one particle: we want to look at a bunch of different quantum fields, and figure out how they will interact.

We still use a path integral to do that, but it doesn’t look like a bunch of lines from point A to B, and there isn’t a convenient image I can steal from Wikipedia for it. The quantum field theory path integral adds up, not all the paths a particle can travel, but all the ways a set of quantum fields can interact.

How do we actually calculate that?

One way is with Feynman diagrams, and (often, but not always) loop integrals.

[Image: a two-loop, four-graviton Feynman diagram]

I’ve talked about Feynman diagrams before. Each one is a picture of one possible way that particles can travel, or that quantum fields can interact. In some (loose) sense, each one is a single path in the path integral.

Each diagram serves as instructions for a calculation. We take information about the particles, their momenta and energy, and end up with a number. To calculate a path integral exactly, we’d have to add up all the diagrams we could possibly draw, to get a sum over all possible paths.

(There are ways to avoid this in special cases, which I’m not going to go into here.)

Sometimes, getting a number out of a diagram is fairly simple. If the diagram has no closed loops in it (if it’s what we call a tree diagram) then knowing the properties of the in-coming and out-going particles is enough to know the rest. If there are loops, though, there’s uncertainty: you have to add up every possible momentum of the particles in the loops. You do that with a different integral, and that’s the one that we sometimes refer to as a loop integral. (Perhaps confusingly, these are also often called Feynman integrals: Feynman did a lot of stuff!)

\frac{i^{a+l(1-d/2)}\pi^{ld/2}}{\prod_i \Gamma(a_i)}\int_0^\infty\cdots\int_0^\infty \prod_i\alpha_i^{a_i-1}U^{-d/2}e^{iF/U-i\sum m_i^2\alpha_i}\,d\alpha_1\cdots d\alpha_n

(Here each of the n internal lines contributes a Feynman parameter \alpha_i, and U and F are the Symanzik polynomials, built from the graph structure of the diagram.)

Loop integrals can be pretty complicated, but at heart they’re the same sort of thing you might have seen in a calculus class. Mathematicians are pretty comfortable with them, and they give rise to numbers that mathematicians find very interesting.
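For a small taste of those numbers (a toy iterated integral of my own, not an actual Feynman diagram): two integrations of rational and logarithmic pieces already produce a dilogarithm, and with it \zeta(2)=\pi^2/6.

```python
# Two nested integrations of simple rational/log pieces give a dilogarithm;
# its value at 1 is zeta(2) = pi^2/6, a number mathematicians care about a
# great deal. (A toy example, not a real loop integral.)
import sympy as sp

y = sp.symbols("y", positive=True)
antideriv = sp.integrate(-sp.log(1 - y) / y, y)  # -> polylog(2, y)
print(antideriv)
print(antideriv.subs(y, 1))                      # -> pi**2/6
```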

Path integrals are very different. In some sense, they’re an “integral over integrals”, adding up every loop integral you could write down. Mathematicians can define path integrals in special cases, but it’s still not clear that the general case, the overall path integral picture we use, actually makes rigorous mathematical sense.

So if you see physicists talking about integrals, it’s worth taking a moment to figure out which one we mean. Path integrals and loop integrals are both important, but they’re very, very different things.

Writing the Paper Changes the Results

You spent months on your calculation, but finally it’s paid off. Now you just have to write the paper. That’s the easy part, right?

Not quite. Even if writing itself is easy for you, writing a paper is never just writing. To write a paper, you have to make your results as clear as possible, to fit them into one cohesive story. And often, doing that requires new calculations.

It’s something that first really struck me when talking to mathematicians, who may be the most extreme case. For them, a paper needs to be a complete, rigorous proof. Even when they have a result solidly plotted out in their head, when they’re sure they can prove something and they know what the proof needs to “look like”, actually getting the details right takes quite a lot of work.

Physicists don’t have quite the same standards of rigor, but we have a similar paper-writing experience. Often, trying to make our work clear raises novel questions. As we write, we try to put ourselves in the mind of a potential reader. Sometimes our imaginary reader is content and quiet. Other times, though, they object:

“Does this really work for all cases? What about this one? Did you make sure you can’t do this, or are you just assuming? Where does that pattern come from?”

Addressing those objections requires more work, more calculations. Sometimes, it becomes clear we don’t really understand our results at all! The paper takes a new direction, flows with new work to a new, truer message, one we wouldn’t have discovered if we hadn’t sat down and tried to write it out.

At Least One Math Term That Makes Sense

I’ve complained before about how mathematicians name things. Mathematicians seem to have a knack for taking an ordinary, bland word that’s almost indistinguishable from the other ordinary, bland words they’ve used before and assigning it an incredibly specific mathematical concept. Varieties and forms, motives and schemes: in each case you end up wishing they’d picked a word that was just a little more descriptive.

Sometimes, though, a word may seem completely out of place when it actually has a fairly reasonable explanation. Such is the case for the word “period”.

Suppose you want to classify numbers. You have the integers, and the rational numbers. A bigger class of numbers are “algebraic”, in that you can get them “from algebra”: more specifically, as solutions of polynomial equations with rational coefficients. Numbers that aren’t algebraic are “transcendental”, a popular example being \pi.

Periods lie in between: a set that contains algebraic numbers, but also many of the transcendental numbers. They’re numbers you can get, not from algebra, but from calculus: they’re integrals over rational functions. These numbers were popularized by Kontsevich and Zagier, and they’ve led to a lot of fruitful inquiry in both math and physics.
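For a concrete feel, here are two classic examples (my choices, not Kontsevich and Zagier’s): transcendental numbers arising as integrals of rational functions between rational endpoints.

```python
# Two periods in the Kontsevich-Zagier sense: rational integrands, rational
# endpoints, transcendental results. (Illustrative examples of my own.)
import sympy as sp

x = sp.symbols("x")
print(sp.integrate(4 / (1 + x**2), (x, 0, 1)))  # -> pi
print(sp.integrate(1 / x, (x, 1, 2)))           # -> log(2)
```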

But why the heck are they called periods?

Think about e^{i x}.

[Image: Or if you prefer, think about a circle]

e^{i x} is a periodic function, with period 2\pi. Take x from 0 to 2\pi and the function repeats: you’ve traveled in a circle.

Thought of another way, 2\pi is the volume (the circumference) of the circle. Up to a factor of i, it’s the integral of \frac{dz}{z} around the circle: \oint_{|z|=1}\frac{dz}{z}=2\pi i. And that integral nicely matches Kontsevich and Zagier’s definition of a period.

The idea of a period, then, comes from generalizing this. What happens when you only go partway around the circle, to some point z in the complex plane? Then you need to go to a point x=-i \ln z. So a logarithm can also be thought of as measuring the period of e^{i x}. And indeed, since a logarithm can be expressed as \int\frac{dz}{z}, they count as periods in the Kontsevich-Zagier sense.

Starting there, you can loosely think about the polylogarithm functions I like to work with as collections of logs, measuring periods of interlocking circles.

And if you need to go beyond polylogarithms, when you can’t just go circle by circle?

Then you need to think about functions with two periods, like Weierstrass’s elliptic function. Just as you can think about e^{i x} as a circle, you can think of Weierstrass’s function in terms of a torus.

[Image: a torus. Obligatory donut joke here]

The torus has two periods, corresponding to the two circles you can draw around it. The periods of Weierstrass’s function are transcendental numbers, and they fit Kontsevich and Zagier’s definition of periods. And if you take the inverse of Weierstrass’s function, you get an elliptic integral, just like taking the inverse of e^{i x} gives a logarithm.
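To make that concrete, here’s a numerical sketch of my own, with invariants g_2 and g_3 picked by hand so all the roots are real: the real period of Weierstrass’s function, computed as a period integral over the curve y^2=4x^3-g_2x-g_3.

```python
# The real period of a Weierstrass elliptic function, computed as a period
# integral over the curve y^2 = 4x^3 - g2*x - g3. (My own sketch; g2 and g3
# are example values.) The substitution x = e1 + t^2 removes the endpoint
# singularity of dx/sqrt(4x^3 - g2*x - g3) at the largest root e1.
import numpy as np
from scipy.integrate import quad

g2, g3 = 4.0, 1.0
e3, e2, e1 = np.sort(np.roots([4, 0, -g2, -g3]).real)  # roots of the cubic

def integrand(t):
    return 1.0 / np.sqrt((e1 + t**2 - e2) * (e1 + t**2 - e3))

half_period, _ = quad(integrand, 0, np.inf)
print(2 * half_period)  # the real period of the corresponding Weierstrass function
```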

So mathematicians, I apologize. Periods, at least, make sense.

I’m still mad about “varieties” though.