Category Archives: Amplitudes Methods

Amplitudes 2018

This week, I’m at Amplitudes, my field’s big yearly conference. The conference is at SLAC National Accelerator Laboratory this year, a familiar and lovely place.


Welcome to the Guest House California

It’s been a packed conference, with a lot of interesting talks. Recordings and slides of most of them should be up at this point, for those following along at home. I’ll comment on a few that caught my attention; I might do a more in-depth post later.

The first morning was dedicated to gravitational waves. At the QCD Meets Gravity conference last December I noted that amplitudes folks were very eager to do something relevant to LIGO, but that it was still a bit unclear how we could contribute (aside from Pierpaolo Mastrolia, who had already figured it out). The following six months appear to have cleared things up considerably, and Clifford Cheung and Donal O’Connell’s talks laid out quite concrete directions for this kind of research.

I’d seen Erik Panzer talk about the Hepp bound two weeks ago at Les Houches, but that was for a much more mathematically-inclined audience. It’s been interesting seeing people here start to see the implications: a simple method to classify and estimate (within 1%!) Feynman integrals could be a real game-changer.

Brenda Penante’s talk made me rethink a slogan I like to quote, that N=4 super Yang-Mills is the “most transcendental” part of QCD. While this is true in some cases, in many ways it’s actually least true for amplitudes, with quite a few counterexamples. For other quantities (like the form factors that were the subject of her talk) it’s true more often, and it’s still unclear when we should expect it to hold, or why.

Nima Arkani-Hamed has a reputation for talks that end up much longer than scheduled. Lately, it seems to be due to the sheer number of projects he’s working on. He had to rush at the end of his talk, which would have been about cosmological polytopes. I’ll have to ask his collaborator Paolo Benincasa for an update when I get back to Copenhagen.

Tuesday afternoon was a series of talks on the “NNLO frontier”, two-loop calculations that form the state of the art for realistic collider physics predictions. These talks brought home to me that the LHC really does need two-loop precision, and that the methods to get it are still pretty cumbersome. For those of us off in the airy land of six-loop N=4 super Yang-Mills, this is the challenge: can we make what these people do simpler?

Wednesday cleared up a few things for me, from what kinds of things you can write down in “fishnet theory” to how broad Ashoke Sen’s soft theorem is, to how fast John Joseph Carrasco could show his villanelle slide. It also gave me a clearer idea of just what simplifications are available for pushing to higher loops in supergravity.

Wednesday was also the poster session. It keeps amazing me how fast the field is growing; the sheer number of new faces was quite inspiring. One of those new faces pointed me to a paper I had missed, suggesting that elliptic integrals could end up trickier than most of us had thought.

Thursday featured two talks by people who work on the Conformal Bootstrap, one of our subfield’s closest relatives. (We’re both “bootstrappers” in some sense.) The talks were interesting, but there wasn’t a lot of engagement from the audience, so if the intent was to make a bridge between the subfields I’m not sure it panned out. Overall, I think we’re mostly just united by how we feel about Simon Caron-Huot, who David Simmons-Duffin described as “awesome and mysterious”. We also had an update on attempts to extend the Pentagon OPE to ABJM, a three-dimensional analogue of N=4 super Yang-Mills.

I’m looking forward to Friday’s talks, promising elliptic functions among other interesting problems.


Quelques Houches

For the last two weeks I’ve been at Les Houches, a village in the French Alps, for the Summer School on Structures in Local Quantum Field Theory.


To assist, we have a view of some very large structures in local quantum field theory

Les Houches has a long history of prestigious summer schools in theoretical physics, going back to the activity of Cécile DeWitt-Morette after the Second World War. This was more of a workshop than a “school”, though: each speaker gave one talk, and the talks weren’t really geared for students.

The workshop was organized by Dirk Kreimer and Spencer Bloch, who both have a long track record of work on scattering amplitudes with a high level of mathematical sophistication. The group they invited was an even mix of physicists interested in mathematics and mathematicians interested in physics. The result was a series of talks that managed to be both thoroughly technical and extremely deep, asking questions like “is quantum electrodynamics really an asymptotic series?” and “are there simple graph invariants that uniquely identify Feynman integrals?”. Several talks concerned something called the Spine of Outer Space, which still sounds a bit like a bad sci-fi novel. Along the way there were also talks showcasing the growing understanding of elliptic polylogarithms, giving me an opportunity to quiz Johannes Broedel about his recent work.

While some of the more mathematical talks went over my head, they spurred a lot of productive dialogues between physicists and mathematicians. Several talks had last-minute slides, added as a result of collaborations that happened right there at the workshop. There was even an entire extra talk, by David Broadhurst, based on work he did just a few days before.

We also had a talk by Jaclyn Bell, a former student of one of the participants who was on a BBC reality show about training to be an astronaut. She’s heavily involved in outreach now, and honestly I’m a little envious of how good she is at it.

An Omega for Every Alpha

In particle physics, we almost always use approximations.

Often, we assume the forces we consider are weak. We use a “coupling constant”, some number written g, a, or \alpha, and we assume it’s small, so that \alpha > \alpha^2 > \alpha^3. With this assumption, we can start drawing Feynman diagrams, and each “loop” we add to the diagram gives us a higher power of \alpha.
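To see the pattern concretely, here’s a minimal numerical sketch (plain Python, with made-up coefficients standing in for the loop contributions):

```python
# Toy perturbative series: loop n contributes c_n * alpha**n.
# The coefficients here are invented, purely for illustration.
alpha = 0.1                            # a "weak" coupling
coefficients = [1.0, 0.5, 0.3, 0.2]    # c_1 .. c_4, one per loop order

terms = [c * alpha**n for n, c in enumerate(coefficients, start=1)]
print(terms)       # each loop's term is much smaller than the last
print(sum(terms))  # so truncating at a few loops is a good approximation
```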

If \alpha isn’t small, then the trick stops working, the diagrams stop making sense, and we have to do something else.

Except that sometimes, everything keeps working fine. This week, along with Simon Caron-Huot, Lance Dixon, Andrew McLeod, and Georgios Papathanasiou, I published what turned out to be a pretty cute example.

[Figure: the family of Feynman diagrams we call \Omega]

We call this fellow \Omega. It’s a family of diagrams that we can write down for any number of loops: to get more loops, just extend the “…”, adding more boxes in the middle. Count the number of lines sticking out, and you get six: these are “hexagon functions”, the type of function I’ve used to calculate six-particle scattering in N=4 super Yang-Mills.

The fun thing about \Omega is that we don’t have to think about it this way, one loop at a time. We can add up all the loops, \alpha times one loop plus \alpha^2 times two loops plus \alpha^3 times three loops, all the way up to infinity. And we’ve managed to figure out what those loops sum to.

[Figure: the closed-form result of summing \Omega to all loops]

The result ends up beautifully simple. This formula isn’t just true for small coupling constants: it’s true for any number you care to plug in, making the forces as strong as you’d like.
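Our actual formula for \Omega is more involved than anything I can show here, but the basic move, summing a loop expansion into a closed form that makes sense at any coupling, already shows up in a toy example (sympy, with an invented series that has nothing to do with \Omega itself):

```python
import sympy as sp

alpha, n = sp.symbols('alpha n')

# A made-up stand-in for a loop expansion: sum over n of alpha**n / n!.
# Term by term it's a small-coupling expansion, but the resummed answer,
# exp(alpha) - 1, makes sense for any alpha you care to plug in.
series = sp.Sum(alpha**n / sp.factorial(n), (n, 1, sp.oo))
print(series.doit())   # exp(alpha) - 1
```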

We can do this with \Omega because we have equations relating different loops together. Solving those equations with a few educated guesses, we can figure out the full sum. We can also go back, and use those equations to take the \Omegas at each loop apart, finding a basis of functions needed to describe them.

That basis is the real reward here. It’s not the full basis of “hexagon functions”: if you wanted to do a full six-particle calculation, you’d need more functions than the ones \Omega is made of. What it is, though, is a basis we can describe completely, stating exactly what it’s made of for any number of loops.

We can’t do that with the hexagon functions, at least not yet: we have to build them loop by loop, finding each order before we can find the next. The hope, though, is that we won’t have to do this much longer. The \Omega basis covers some of the functions we need. Our hope is that other nice families of diagrams can cover the rest. If we can identify more functions like \Omega, things that we can sum to any number of loops, then perhaps we won’t have to think loop by loop anymore. If we know the right building blocks, we might be able to guess the whole amplitude, to find a formula that works for any \alpha you’d like.

That would be a big deal. N=4 super Yang-Mills isn’t the real world, but it’s complicated in some of the same ways. If we can calculate there without approximations, it should at least give us an idea of what part of the real-world answer can look like. And for a field that almost always uses approximations, that’s some pretty substantial progress.

Be Rational, Integrate Our Way!

I’ve got another paper up this week with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, about integrating Feynman diagrams.

If you’ve been following this blog for a while, you might be surprised: most of my work avoids Feynman diagrams at all costs. I’ve changed my mind, in part, because it turns out integrating Feynman diagrams can be a lot easier than I had thought.

At first, I thought Feynman integrals would be hard purely because they’re integrals. Those of you who’ve taken calculus might remember that, while taking derivatives was just a matter of following the rules, doing integrals required a lot more thought. Rather than one set of instructions, you had a set of tricks, meant to try to match your integral to the derivative of some known function. Sometimes the tricks worked, sometimes you just ended up completely lost.

As it turns out, that’s not quite the problem here. When I integrate a Feynman diagram, most of the time I’m expecting a particular kind of result, called a polylogarithm. If you know that’s the end goal, then you really can just follow the rules, using partial-fractioning to break your integral up into simpler integrations, linear pieces that you can match to the definition of polylogarithms. There are even programs that do this for you: Erik Panzer’s HyperInt is an especially convenient one.


Or it would be convenient, if Maple’s GUI wasn’t cursed…
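To give a flavor of what “following the rules” looks like, here’s a deliberately simple integrand worked through in sympy rather than HyperInt:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Partial-fractioning breaks the rational integrand into linear pieces...
integrand = 1 / (x * (x + 1))
print(sp.apart(integrand))         # 1/x - 1/(x + 1)

# ...and each linear piece integrates to a logarithm, the weight-one
# case of the polylogarithms we expect from these diagrams.
print(sp.integrate(integrand, x))  # log(x) - log(x + 1)
```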

Still, I wouldn’t have expected Feynman integrals to work particularly well, because they require too many integrations. You need to integrate a certain number of times to define a polylogarithm: for the ones we get out of Feynman diagrams, it’s two integrations for each loop the diagram has. The usual ways we calculate Feynman diagrams lead to a lot more integrations: the systematic method, using something called Symanzik polynomials, involves one integration per particle line in the diagram, which usually adds up to a lot more than two per loop.
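Here’s that bookkeeping in miniature, a rough sketch (the double-box counts are standard; the minus one below comes from the delta function that fixes the overall scale of the Feynman parameters):

```python
# Rough counting for the argument above: the parametric (Symanzik)
# representation costs about one integration per internal line, while
# the polylogarithms we expect only have weight two per loop.
def parametric_integrations(internal_lines):
    # one Feynman parameter per propagator, one fixed by a delta function
    return internal_lines - 1

def expected_weight(loops):
    return 2 * loops

# e.g. the planar two-loop "double box", with 7 internal lines:
print(parametric_integrations(7))  # 6 integrations to perform...
print(expected_weight(2))          # ...for an answer of weight only 4
```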

When I arrived at the Niels Bohr Institute, I assumed everyone in my field knew about Symanzik polynomials. I was surprised when it turned out Jake Bourjaily hadn’t even heard of them. He was integrating Feynman diagrams by what seemed like a plodding, unsystematic method, taking the intro example from textbooks and just applying it over and over, gaining no benefit from all of the beautiful graph theory that goes into the Symanzik polynomials.

I was even more surprised when his method turned out to be the better one.

Avoid Symanzik polynomials, and you can manage with a lot fewer integrations. Suddenly we were pretty close to the “two integrations per loop” sweet spot, with only one or two “extra” integrations to do.

A few more advantages, and Feynman integrals were actually looking reasonable. The final insight came when we realized that just writing the problem in the right variables made a huge difference.

HyperInt, as I mentioned, tries to break a problem up into simpler integrals. Specifically, it’s trying to make things linear in the integration variable. In order to do this, sometimes it has to factor quadratic polynomials, like so:

ax^2 + bx + c = a \left( x - \tfrac{-b + \sqrt{b^2 - 4ac}}{2a} \right) \left( x - \tfrac{-b - \sqrt{b^2 - 4ac}}{2a} \right)

Notice the square roots in this formula? Those can make your life a good deal trickier. Once you’ve got irrational functions in the game, HyperInt needs extra instructions for how to handle them, and integration is a lot more cumbersome.

The last insight, then, and the key point in our paper, is to avoid irrational functions. To do that, we use variables that rationalize the square roots.
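The idea is easiest to see in one variable. The substitution below is the classic textbook one for this particular root (not the momentum-twistor chart from our paper), but it shows what rationalizing buys you:

```python
import sympy as sp

x, t = sp.symbols('x t')

# The argument of the square root in sqrt(1 - x**2)...
inside = 1 - x**2

# ...becomes a perfect square under the substitution x = 2t/(1 + t**2):
substituted = inside.subs(x, 2*t / (1 + t**2))
print(sp.factor(substituted))
# (t - 1)**2*(t + 1)**2/(t**2 + 1)**2 -- a perfect square, so the
# square root itself is the rational function (1 - t**2)/(1 + t**2)
```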

We get these variables from one of the mainstays of our field, called momentum twistors. These variables are most useful in our favorite theory of N=4 super Yang-Mills, but they’re useful in other contexts too. By parametrizing them with a good “chart”, one with only the minimum number of variables we need to capture the integral, we can rationalize most of the square roots we encounter.

That “most” is going to surprise some people. We rationalized all of the expected square roots, letting us do integrals all the way to four loops in a few cases. But there were some unexpected square roots, and those we couldn’t rationalize.

These unexpected square roots don’t just make our life more complicated: if they stick around in a physically meaningful calculation, they’ll upset a few other conjectures as well. People had expected that these integrals were made of certain kinds of “letters”, organized by a mathematical structure called a cluster algebra. That cluster algebra structure doesn’t have room for square roots, which suggests that it can’t be the full story here.

The integrals that we can do, though, with no surprise square roots? They’re much easier than anyone expected, much easier than with any other method. Rather than running around doing something fancy, we just integrated things the simple, rational way…and it worked!

Calabi-Yaus for Higgs Phenomenology

less joking title:

You Didn’t Think We’d Stop at Elliptics, Did You?

When calculating scattering amplitudes, I like to work with polylogarithms. They’re a very well-understood type of mathematical function, and thus pretty easy to work with.

Even for our favorite theory of N=4 super Yang-Mills, though, they’re not the whole story. You need other types of functions to represent amplitudes, elliptic polylogarithms that are only just beginning to be properly understood. We had our own modest contribution to that topic last year.

You can think of the difference between these functions in terms of more and more complicated geometries. Polylogarithms just need circles or spheres, while elliptic polylogarithms can be described with a torus.

A torus is far from the most complicated shape you can think of, though.

String theorists have done a lot of research into complicated geometries, in particular ones with a property called Calabi-Yau. They were looking for ways to curl up six or seven extra dimensions, to get down to the four we experience. They wanted to find ways of curling that preserved some supersymmetry, in the hope that they could use it to predict new particles, and it turned out that Calabi-Yau was the condition they needed.

That hope, for the most part, didn’t pan out. There were too many Calabi-Yaus to check, and the LHC hasn’t seen any supersymmetric particles. Today, “string phenomenologists”, who try to use string theory to predict new particles, are a relatively small branch of the field.

This research did, however, have lasting impact: due to string theorists’ interest, there are huge databases of Calabi-Yau manifolds, and fruitful dialogues with mathematicians about classifying them.

This has proven quite convenient for us, as we happen to have some Calabi-Yaus to classify.

[Figure: a “traintrack” Feynman diagram]

Our midnight train going anywhere…in the space of Calabi-Yaus

We call Feynman diagrams like the one above “traintrack integrals”. With two loops, it’s the elliptic integral we calculated last year. With three, though, you need a type of Calabi-Yau surface called a K3. With four loops, it looks like you start needing Calabi-Yau three-folds, the type of space used to compactify string theory down to four dimensions.

“We” in this case is myself, Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, and Yang-Hui He, a Calabi-Yau expert we brought on to help us classify these things. Our new paper investigates these integrals, and the more and more complicated geometries needed to compute them.

Calabi-Yaus had been seen in amplitudes before, in diagrams called “sunrise” or “banana” integrals. Our example shows that they should occur much more broadly. “Traintrack” integrals appear in our favorite N=4 super Yang-Mills theory, but they also appear in theories involving just scalar fields, like the Higgs boson. For enough loops and particles, we’re going to need more and more complicated functions, not just the polylogarithms and elliptic polylogarithms that people understand.

(And to be clear, no, nobody needs to do this calculation for Higgs bosons in practice. This diagram would calculate the result of two Higgs bosons colliding and producing ten or more Higgs bosons, all at energies so high you can ignore their mass, which is…not exactly relevant for current collider phenomenology. Still, the title proved too tempting to resist.)

Is there a way to understand traintrack integrals like we understand polylogarithms? What kinds of Calabi-Yaus do they pick out, in the vast space of these curves? We’d love to find out. For the moment, we just wanted to remind all the people excited about elliptic polylogarithms that there’s quite a bit more strangeness to find, even if we don’t leave the tracks.

The Amplitudes Long View

Occasionally, other physicists ask me what the goal of amplitudes research is. What’s it all about?

I want to give my usual answer: we’re calculating scattering amplitudes! We’re trying to compute them more efficiently, taking advantage of simplifications and using a big toolbox of different approaches, and…

Usually by this point in the conversation, it’s clear that this isn’t what they were asking.

When physicists ask me about the goal of amplitudes research, they’ve got a longer view in mind. Maybe they’ve seen a talk by Nima Arkani-Hamed, declaring that spacetime is doomed. Maybe they’ve seen papers arguing that everything we know about quantum field theory can be derived from a few simple rules. Maybe they’ve heard slogans, like “on-shell good, off-shell bad”. Maybe they’ve heard about the conjecture that N=8 supergravity is finite, or maybe they’ve just heard someone praise the field for “demoting the sacred cows like fields, Lagrangians, and gauge symmetry”.

Often, they’ve heard a little bit of all of these. Sometimes they’re excited, sometimes they’re skeptical, but either way, they’re usually more than a little confused. They’re asking how all of these statements fit into a larger story.

The glib answer is that they don’t. Amplitudes has always been a grab-bag of methods: different people with different backgrounds, united by their interest in a particular kind of calculation.

With that said, I think there is a shared philosophy, even if each of us approaches it a little differently. There is an overall principle that unites the amplituhedron and color-kinematics duality, the CHY string and bootstrap methods, BCFW and generalized unitarity.

If I had to describe that principle in one word, I’d call it minimality. Quantum field theory involves hugely complicated mathematical machinery: Lagrangians and path integrals, Feynman diagrams and gauge fixing. At the end of the day, if you want to answer a concrete question, you’re computing a few specific kinds of things: mostly, scattering amplitudes and correlation functions. Amplitudes tries to start from the other end, and ask what outputs of this process are allowed. The idea is to search for something minimal: a few principles that, when applied to a final answer in a particular form, specify it uniquely. The form in question varies: it can be a geometric picture like the amplituhedron, or a string-like worldsheet, or a constructive approach built up from three-particle amplitudes. The goal, in each case, is the same: to skip the usual machinery, and understand the allowed form for the answer.

From this principle, where do the slogans come from? How could minimality replace spacetime, or solve quantum gravity?

It can’t…if we stick to merely matching quantum field theory. As long as each of our calculations matches one that someone else could do with known theories, then even if we’re more efficient, these minimal descriptions won’t really solve these kinds of big-picture mysteries.

The hope (and for the most part, it’s a long-term hope) is that we can go beyond that. By exploring minimal descriptions, we may find not only known theories, but unknown ones as well: theories that weren’t expected in the old understanding of quantum field theory. The amplituhedron doesn’t need space-time; it might lead the way to a theory that doesn’t have space-time. If N=8 supergravity is finite, it could suggest new theories that are finite. The story repeats, with variations, whenever amplitudeologists explore the outlook of our field. If we know the minimal requirements for an amplitude, we could find amplitudes that nobody expected.

I’m not claiming we’re the only field like this: I feel like the conformal bootstrap could tell a similar story. And I’m not saying everyone thinks about our field this way: there’s a lot of deep mathematics in just calculating amplitudes, and it fascinated people long before the field caught on with the Princeton set.

But if you’re asking what the story is for amplitudes, the weird buzz you catch bits and pieces of and can’t quite put together…well, if there’s any unifying story, I think it’s this one.

The State of Four Gravitons

This blog is named for a question: does the four-graviton amplitude in N=8 supergravity diverge?

Over the years, Zvi Bern and a growing cast of collaborators have been trying to answer that question. They worked their way up, loop by loop, until they stalled at five loops. Last year, they finally broke the stall, and last week, they published the result of the five-loop calculation. They find that N=8 supergravity does not diverge at five loops in four dimensions, but does diverge in 24/5 dimensions. I thought I’d write a brief FAQ about the status so far.

Q: Wait a minute, 24/5 dimensions? What does that mean? Are you talking about fractals, or…

Nothing so exotic. The number 24/5 comes from a regularization trick. When we’re calculating an amplitude that might be divergent, one way to deal with it is to treat the dimension like a free variable. You can then see what happens as you vary the dimension, and see when the amplitude starts diverging. If the dimension is an integer, then this ends up matching a more physics-based picture, where you start with a theory in eleven dimensions and curl up the extra ones until you get to the dimension you’re looking for. For fractional dimensions, it’s not clear that there’s any physical picture like this: it’s just a way to talk about how close something is to diverging.
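For a standard concrete example: the simplest one-loop integral comes with a factor of \Gamma(2 - d/2), finite below four dimensions but blowing up as d approaches 4. Writing d = 4 - 2\epsilon turns that factor into \Gamma(\epsilon), and the divergence shows up as a pole (a quick sympy check):

```python
import sympy as sp

eps = sp.symbols('epsilon')

# Gamma(2 - d/2) with d = 4 - 2*epsilon becomes Gamma(epsilon);
# expanding around epsilon = 0 exposes the divergence as a pole.
print(sp.gamma(eps).series(eps, 0, 1))
# 1/epsilon - EulerGamma + O(epsilon)
```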

Q: I’m really confused. What’s a graviton? What is supergravity? What’s a divergence?

I don’t have enough space to explain these things here, but that’s why I write handbooks. Here are explanations of gravitons, supersymmetry, and (N=8) supergravity, loops, and divergences. Please let me know if anything in those explanations is unclear, or if you have any more questions.

Q: Why do people think that N=8 supergravity will diverge at seven loops?

There’s a useful rule of thumb in quantum field theory: anything that can happen, will happen. In this case, that means if there’s a way for a theory to diverge that’s consistent with the symmetries of the theory, then it almost always does diverge. In the past, that meant that people expected N=8 supergravity to diverge at five loops. However, researchers found a previously unknown symmetry that looked like it would forbid the five-loop divergence, and only allow a divergence at seven loops (in four dimensions). Zvi and co.’s calculation confirms that the five-loop divergence doesn’t show up.

More generally, string theory not only avoids divergences but also clears up other puzzles, like those posed by black holes. These two things seem tied together: string theory cleans up problems in quantum gravity in a consistent, unified way. There isn’t a clear way for N=8 supergravity on its own to clean up these kinds of problems, which makes some people skeptical that it can match string theory’s advantages. Either way, N=8 supergravity, unlike string theory, isn’t a candidate theory of nature by itself: it would need to be modified in order to describe our world, and no-one has suggested a way to do that.

Q: Why do people think that N=8 supergravity won’t diverge at seven loops?

There’s a useful rule of thumb in amplitudes: amplitudes are weird. In studying amplitudes we often notice unexpected simplifications, patterns that uncover new principles that weren’t obvious before.

Gravity in general seems to have a lot of these kinds of simplifications. Even without any loops, its behavior is surprisingly tame: it’s a theory that we can build up piece by piece from the three-particle interaction, even though naively we shouldn’t be able to (for the experts: I’m talking about large-z behavior in BCFW). This behavior seems to have an effect on one-loop amplitudes as well. There are other ways in which gravity seems better-behaved than expected; overall, this suggests that we still have a fair way to go before we understand all of the symmetries of gravity theories.

Supersymmetric gravity in particular also seems unusually well-behaved. N=5 supergravity was expected to diverge at four loops, but doesn’t. N=4 supergravity does diverge at four loops, but that seems to be due to an effect that is specific to that case (for the experts: an anomaly).

For N=8 specifically, a suggestive hint came from varying the dimension. If you checked the dimension in which the theory diverged at each loop, you’d find it matched the divergences of another theory, N=4 super Yang-Mills. At l loops, N=4 super Yang-Mills diverges in dimension 4+6/l. From that formula, you can see that no matter how much you increase l, you’ll never get to four dimensions: in four dimensions, N=4 super Yang-Mills doesn’t diverge.
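Running the numbers makes the point (a quick check, shown from two loops on, where the formula applies):

```python
from fractions import Fraction

# Critical dimension where N=4 super Yang-Mills first diverges:
# 4 + 6/l at l loops.
for l in range(2, 8):
    print(l, "loops:", 4 + Fraction(6, l))
# 2 loops: 7, 3 loops: 6, 4 loops: 11/2, 5 loops: 26/5, ...
# The critical dimension creeps toward four as l grows, but never
# reaches it: in exactly four dimensions, the theory never diverges.
```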

At five loops, N=4 super Yang-Mills diverges in 26/5 dimensions. Zvi Bern made a bet with supergravity expert Kelly Stelle that the dimension would be the same for N=8 supergravity: a bottle of California wine from Bern versus English wine from Stelle. Now that they’ve found a divergence in 24/5 dimensions instead, Stelle will likely be getting his wine soon.

Q: It sounds like the calculation was pretty tough. Can they still make it to seven loops?

I think so, yes. Doing the five-loop calculation, they noticed simplifications, clever tricks uncovered by even more clever grad students. The end result is that if they just want to find out whether the theory diverges, they don’t have to do the “whole calculation”, just part of it. This simplifies things a lot. They’ll probably have to find a few more simplifications to make seven loops viable, but I’m optimistic that they’ll find them, and in the meantime the new tricks should have some applications in other theories.

Q: What do you think? Will the theory diverge?

I’m not sure.

To be honest, I’m a bit less optimistic than I used to be. The agreement of divergence dimensions between N=8 supergravity and N=4 super Yang-Mills wasn’t the strongest argument (there’s a reason why, though Stelle accepted the bet on five loops, string theorist Michael Green is waiting on seven loops for his bet). Fractional dimensions don’t obviously mean anything physically, and many of the simplifications in gravity seem specific to four dimensions. Still, it was suggestive, the kind of “motivation” that gets a conjecture started.

Without that motivation, none of the remaining arguments are specific to N=8. I still think unexpected simplifications are likely, that gravity overall behaves better than we yet appreciate. I still would bet on seven loops being finite. But I’m less confident about what it would mean for the theory overall. That’s going to take more serious analysis, digging in to the anomaly in N=4 supergravity and seeing what generalizes. It does at least seem like Zvi and co. are prepared to undertake that analysis.

Regardless, it’s still worth pushing for seven loops. Having that kind of heavy-duty calculation in our sub-field forces us to improve our mathematical technology, in the same way that space programs and particle colliders drive technology in the wider world. If you think your new amplitudes method is more efficient than the alternatives, the push to seven loops is the ideal stress test. Jacob Bourjaily likes to tell me how his prescriptive unitarity technique is better than what Zvi and co. are doing; this is our chance to find out!

Overall, I still stand by what I say in my blog’s sidebar. I’m interested in N=8 supergravity, I’d love to find out whether the four-graviton amplitude diverges…and now that the calculation is once again making progress, I expect that I will.