Tag Archives: amplitudes

At Least One Math Term That Makes Sense

I’ve complained before about how mathematicians name things. Mathematicians seem to have a knack for taking an ordinary, bland word that’s almost indistinguishable from the other ordinary, bland words they’ve used before and assigning it an incredibly specific mathematical meaning. Varieties and forms, motives and schemes: in each case you end up wishing they’d picked a word that was just a little more descriptive.

Sometimes, though, a word may seem completely out of place when it actually has a fairly reasonable explanation. Such is the case for the word “period”.

Suppose you want to classify numbers. You have the integers, and the rational numbers. A bigger class of numbers are “algebraic”, in that you can get them “from algebra”: more specifically, as solutions of polynomial equations with rational coefficients. Numbers that aren’t algebraic are “transcendental”, a popular example being \pi.
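For a concrete (and completely standard) example, \sqrt{2} is algebraic because it solves a polynomial equation with rational coefficients,

x^2 - 2 = 0,

while \pi famously solves no such equation.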

Periods lie in between: a set that contains the algebraic numbers, but also many of the transcendental numbers. They’re numbers you can get, not from algebra, but from calculus: they’re integrals of rational functions, taken over regions carved out by polynomial inequalities with rational coefficients. These numbers were popularized by Kontsevich and Zagier, and they’ve led to a lot of fruitful inquiry in both math and physics.
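Two standard Kontsevich-Zagier examples of transcendental numbers that are nonetheless periods:

\log 2 = \int_1^2 \frac{dx}{x}, \qquad \pi = \iint_{x^2+y^2\leq 1} dx\, dy.

Both are integrals of rational functions (here just 1/x and 1) over regions defined by polynomial inequalities with rational coefficients.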

But why the heck are they called periods?

Think about e^{i x}.


Or if you prefer, think about a circle

e^{i x} is a periodic function, with period 2\pi. Take x from 0 to 2\pi and the function repeats: you’ve traveled in a circle.

Thought of another way, 2\pi is the volume of the circle. Up to a factor of i, it’s the integral of \frac{dz}{z} around the circle. And that integral nicely matches Kontsevich and Zagier’s definition of a period.
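Concretely (this is just the standard contour integral), parametrizing the unit circle as z = e^{i\theta}:

\oint_{|z|=1} \frac{dz}{z} = \int_0^{2\pi} \frac{i e^{i\theta}\, d\theta}{e^{i\theta}} = 2\pi i.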

The idea of a period, then, comes from generalizing this. What happens when you only go partway around the circle, to some point z in the complex plane? Then you need to go to a point x = -i \ln z. So a logarithm can also be thought of as measuring part of the period of e^{i x}. And indeed, since logarithms can be expressed as \int\frac{dz}{z}, they count as periods in the Kontsevich-Zagier sense.
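Explicitly, integrating along a path from 1 to z (avoiding the origin) gives the standard integral representation

\ln z = \int_1^z \frac{dw}{w},

which is exactly the kind of integral Kontsevich and Zagier allow.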

Starting there, you can loosely think about the polylogarithm functions I like to work with as collections of logs, measuring periods of interlocking circles.
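To make “collections of logs” a little more concrete, here is the standard definition of the classical polylogarithm (nothing specific to my own work):

\mathrm{Li}_n(z) = \sum_{k=1}^\infty \frac{z^k}{k^n}, \qquad \mathrm{Li}_2(z) = \int_0^z \frac{dt}{t} \int_0^t \frac{ds}{1-s}.

Each extra integration kernel, \frac{dt}{t} or \frac{ds}{1-s}, is another “log”, which is roughly the sense in which these functions stack circles on top of circles.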

And if you need to go beyond polylogarithms, when you can’t just go circle by circle?

Then you need to think about functions with two periods, like Weierstrass’s elliptic function. Just as you can think about e^{i x} as a circle, you can think of Weierstrass’s function in terms of a torus.


Obligatory donut joke here

The torus has two periods, corresponding to the two circles you can draw around it. The periods of Weierstrass’s function are transcendental numbers, and they fit Kontsevich and Zagier’s definition of periods. And if you take the inverse of Weierstrass’s function, you get an elliptic integral, just like taking the inverse of e^{i x} gives a logarithm.
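To make this concrete (these are standard facts about the Weierstrass function, not anything new): the relevant elliptic curve can be written as y^2 = 4x^3 - g_2 x - g_3, and up to conventions the two periods are the integrals of dx/y around the two independent cycles \gamma_1, \gamma_2 of the torus,

\omega_i = \oint_{\gamma_i} \frac{dx}{\sqrt{4x^3 - g_2 x - g_3}},

the elliptic analogue of \oint \frac{dz}{z} = 2\pi i for the circle.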

So mathematicians, I apologize. Periods, at least, make sense.

I’m still mad about “varieties” though.


Tutoring at GGI

I’m still at the Galileo Galilei Institute this week, tutoring at the winter school.

At GGI’s winter school, each week features a pair of lecturers. This week, the lectures alternate between Lance Dixon, covering the basics of amplitudeology, and Csaba Csaki, discussing ways in which the Higgs could be a composite made up of new fundamental particles.

Most of the students at this school are phenomenologists, physicists who make predictions for particle physics. I’m an amplitudeologist: I study the calculation tools behind those predictions. You’d think these would be very close areas, but it’s been interesting seeing how different our approaches really are.

Some of the difference is apparent just from watching the board. In Csaki’s lectures, the equations that show up are short, a few terms long at most. When amplitudes show up, it’s for their general properties: how many factors of the coupling constant, or the multipliers that show up with loops. There aren’t any long technical calculations, and in general they aren’t needed: he’s arguing about the kinds of physics that can show up, not the specifics of how they give rise to precise numbers.

In contrast, Lance’s board filled up with longer calculations, each with many moving parts. Even things that seem simple from our perspective take a decent amount of board space to derive, and involve no small amount of technical symbol-shuffling. For most of the students, working out an amplitude this complicated was an unfamiliar experience. There are a few applications for which you need the kind of power that amplitudeology provides, and a few students were working on them. For the rest, it was a bit like learning about a foreign culture, an exercise in understanding what other people are doing rather than picking up a new skill themselves. Still, they made a strong go at it, and it was enlightening to see the pieces that ended up mattering to them, and to hear the kinds of questions they asked.

At the GGI Lectures on the Theory of Fundamental Interactions

I’m at the Galileo Galilei Institute for Theoretical Physics in Florence at their winter school, the GGI Lectures on the Theory of Fundamental Interactions. Next week I’ll be helping Lance Dixon teach amplitudeology; this week, I’m catching the tail end of Ira Rothstein’s lectures.


The Galileo Galilei Institute, at the end of a long, winding road filled with small, speedy cars and motorcycles, in classic Italian fashion

Rothstein has been heavily involved in doing gravitational wave calculations using tools from quantum field theory, something that has recently captured a lot of interest from amplitudes people. Specifically, he uses Effective Field Theories: theories that are “effectively” true at some scale but hide away higher-energy physics. In the case of gravitational waves, these theories are a powerful way to calculate the waves that LIGO and VIRGO can observe without using the full machinery of general relativity.
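Schematically (this is the generic effective field theory template, not the specifics of Rothstein’s setup), an effective Lagrangian looks like

\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm low\,energy} + \sum_i \frac{c_i}{\Lambda^{n_i}} \mathcal{O}_i,

a theory you trust at low energies plus a tower of correction terms \mathcal{O}_i suppressed by powers of a heavy scale \Lambda, with coefficients c_i that package up whatever the high-energy physics really is.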

After seeing Rothstein’s lectures, I’m reminded of something he pointed out at the QCD Meets Gravity conference in December. He emphasized then that even if amplitudes people get very good at drawing diagrams for classical general relativity, that won’t be the whole story: there’s a series of corrections needed to “match” between the quantities LIGO is able to see and the ones we’re able to calculate. Different methods incorporate these corrections in different ways, and the most intuitive approach for us amplitudes folks may still end up cumbersome once all the corrections are included. In typical amplitudes fashion, this just makes me wonder if there’s a shortcut: some way to compute, not just a piece that gets plugged in to an Effective Field Theory story, but the waves LIGO sees in one fell swoop (or at least, the part where gravity is weak enough that our methods are still useful). That’s probably a bit naive of me, though.

Proofs and Insight

Hearing us talking about the Amplituhedron, the professor across the table chimed in.

“The problem with you amplitudes people, I never know what’s a conjecture and what’s proven. The Amplituhedron, is that still a conjecture?”

The Amplituhedron, indeed, is still a conjecture (although a pretty well-supported one at this point). After clearing that up, we got to talking about the role proofs play in theoretical physics.

The professor was worried that we weren’t being direct enough in stating which ideas in amplitudes had been proven. While I agreed that we should be clearer, one of his points stood out to me: he argued that one benefit of clearly labeling conjectures is that it motivates people to go back and prove things. That’s a good thing to do in general, to be sure that your conjecture is really true, but often it has an added benefit: even if you’re pretty sure your conjecture is true, proving it can show you why it’s true, leading to new and valuable insight.

There’s a long history of important physics only becoming clear when someone took the time to work out a proof. But in amplitudes right now, I don’t think our lack of proofs is leading to a lack of insight. That’s because the kinds of things we’d like to prove often require novel insight themselves.

It’s not clear what it would take to prove the Amplituhedron. Even if you’ve got a perfectly clear, mathematically nice definition for it, you’d still need to prove that it does what it’s supposed to do: that it really calculates scattering amplitudes in N=4 super Yang-Mills. In order to do that, you’d need a very complete understanding of how those calculations work. You’d need to be able to see how known methods give rise to something like the Amplituhedron, or to find the Amplituhedron buried deep in the structure of the theory.

If you had that kind of insight? Then yeah, you could prove the Amplituhedron, and accomplish remarkable things along the way. But more than that, if you had that sort of insight, you would prove the Amplituhedron. Even if you didn’t know about the Amplituhedron to begin with, or weren’t sure whether or not it was a conjecture, once you had that kind of insight, proving something like the Amplituhedron would be the inevitable next step. The signpost “this is a conjecture” is helpful for other reasons, but it doesn’t change circumstances here: either you have what you need, or you don’t.

This contrasts with how progress works in other parts of physics, and how it has worked at other times. Sometimes, a field is moving so fast that conjectures get left by the wayside, even when they’re provable. You get situations where everyone busily assumes something is true and builds off it, and no-one takes the time to work out why. In that sort of field, it can be really valuable to clearly point out conjectures, so that someone gets motivated to work out the proof (and to hopefully discover something along the way).

I don’t think amplitudes is in that position though. It’s still worthwhile to signal our conjectures, to make clear what needs a proof and what doesn’t. But our big conjectures, like the Amplituhedron, aren’t the kind of thing someone can prove just by taking some time off and working on it. They require new, powerful insight. Because of that, our time is typically best served looking for that insight, finding novel examples and unusual perspectives that clear up what’s really going on. That’s a fair bit broader an activity than just working out a proof.

4gravitons Meets QCD Meets Gravity

I’m at UCLA this week, for the workshop QCD Meets Gravity. I haven’t worked on QCD or gravity yet, so I’m mostly here as an interested observer, and as an excuse to enjoy Los Angeles in December.


I think there’s a song about this…

QCD Meets Gravity is a conference centered around the various ways that “gravity is Yang-Mills squared”. There are a number of tricks that let you “square” calculations in Yang-Mills theories (a type of theory that includes QCD) to get calculations in gravity, and this conference showcased most of them.
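The simplest of these “squaring” tricks to state is the double copy in its BCJ form. Very schematically (and glossing over many details): write a Yang-Mills amplitude as a sum over diagrams with color factors c_i, kinematic numerators n_i, and propagator denominators D_i, then swap the color factors for a second copy of the kinematics,

A_{\rm YM} \sim \sum_i \frac{c_i\, n_i}{D_i} \quad\longrightarrow\quad M_{\rm gravity} \sim \sum_i \frac{n_i\, \tilde{n}_i}{D_i},

which works provided the numerators are chosen to obey the same algebraic identities as the color factors.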

At Amplitudes this summer, I was disappointed there were so few surprises. QCD Meets Gravity was different, with several talks on new or preliminary results, including one by Julio Parra-Martinez where the paper went up in the last few minutes of the talk! Yu-tin Huang talked about his (still-unpublished) work with Nima Arkani-Hamed on “UV/IR Polytopes”. The story there is a bit like the conformal bootstrap, with constraints (in this case based on positivity) marking off a space of “allowed” theories. String theory, interestingly, is quite close to the boundary of what is allowed. Enrico Herrmann is working on a way to figure out which gravity integrands are going to diverge without actually integrating them, while Simon Caron-Huot, in his characteristic out-of-the-box style, is wondering whether supersymmetric black holes precess. We also heard a bit more about a few recent papers. Oliver Schlotterer’s talk cleared up one thing: apparently the GEF functions he defines in his paper on one-loop “Z theory” are pronounced “Jeff”. I kept waiting for him to announce “Jeff theory”, but unfortunately no such luck. Sebastian Mizera’s talk was a very clear explanation of intersection theory, the subject of his recent paper. As it turns out, intersection theory is the study of mathematical objects like the Beta function (which shows up extensively in string theory), taking them apart in a way very reminiscent of the “squaring” story of Yang-Mills and gravity.

The heart of the workshop this year was gravitational waves. Since LIGO started running, amplitudes researchers (including, briefly, me) have been looking for ways to get involved. This conference’s goal was to bring together amplitudes people and the gravitational wave community, to get a clearer idea of what we can contribute. Between talks and discussions, I feel like we all understand the problem better. Some things that the amplitudes community thought were required, like breaking the symmetries of special relativity, turn out to be accidents of how the gravitational wave community calculates things: approximations that made things easier for them, but make things harder for us. There are areas in which we can make progress quite soon, even areas in which amplitudes people have already made progress. The detectors for which the new predictions matter might still be in the future (LIGO can measure two or three “loops”, LISA will see up to four), but they will eventually be measured. Amplitudes and gravitational wave physics could turn out to be a very fruitful partnership.


An Elliptical Workout

I study scattering amplitudes, probabilities that particles scatter off each other.

In particular, I’ve studied them using polylogarithmic functions. Polylogarithmic functions can be taken apart into “logs”, which obey identities much like logarithms do. They’re convenient and nice, and for my favorite theory, N=4 super Yang-Mills, they’re almost all you need.
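A standard example of the kind of identity I mean is Euler’s reflection formula for the dilogarithm,

\mathrm{Li}_2(z) + \mathrm{Li}_2(1-z) = \frac{\pi^2}{6} - \log(z)\log(1-z) \quad (0 < z < 1),

and relations like this are what let us take polylogarithmic answers apart and put them back together. In N=4, that gets you remarkably far.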

Well, until ten particles get involved, anyway.

That’s when you start needing elliptic integrals, and elliptic polylogarithms. These functions replace one of the “logs” of a polylogarithm with an integration over an elliptic curve.
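The classic example of the new ingredient is the complete elliptic integral of the first kind,

K(k) = \int_0^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}},

where the square root of a quartic (equivalently, an elliptic curve y^2 = (1-t^2)(1-k^2 t^2)) means the integral can’t be boiled down to logarithms.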

And with Jacob Bourjaily, Andrew McLeod, Marcus Spradlin, and Matthias Wilhelm, I’ve now computed one.


This one, to be specific

Our paper, The Elliptic Double-Box Integral, went up on the arXiv last night.

The last few weeks have been a frenzy of work, finishing up our calculations and writing the paper. It’s the fastest I’ve ever gotten a paper out, which has been a unique experience.

Computing this integral required new, so far unpublished tricks by Jake Bourjaily, as well as some rather powerful software and Marcus Spradlin’s extensive expertise in simplifying polylogarithms. In the end, we got the integral into a “canonical” form, one other papers had proposed as the right way to represent it, with the elliptic curve in a form standardized by Weierstrass.

One of the advantages of fixing a “canonical” form is that it should make identities obvious. If two integrals are actually the same, then writing them according to the same canonical rules should make that clear. This is one of the nice things about polylogarithms, where these identities are really just identities between logs and the right form is comparatively easy to find.
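As a toy example of what a canonical form buys you with ordinary logs: an identity like

\log(x^2 - 1) = \log(x - 1) + \log(x + 1) \quad (x > 1)

is hidden if you leave the left-hand side alone, but becomes automatic once everything is written in terms of logs of irreducible factors, which is essentially what a canonical form should do.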

Surprisingly, the form we found doesn’t do this. We can write down an integral in our “canonical” form that looks different, but really is the same as our original integral. The form other papers had suggested, while handy, can’t be the final canonical form.

What the final form should be, we don’t yet know. We have some ideas, but we’re also curious what other groups are thinking. We’re relatively new to elliptic integrals, and there are other groups with much more experience with them, some with papers coming out soon. As far as we know they’re calculating slightly different integrals, ones more relevant for the real world than for N=4 super Yang-Mills. It’s going to be interesting seeing what they come up with. So if you want to follow this topic, don’t just watch for our names on the arXiv: look for Claude Duhr and Falko Dulat, Luise Adams and Stefan Weinzierl. In the elliptic world, big things are coming.

Interesting Work at the IAS

I’m visiting the Institute for Advanced Study this week, on the outskirts of Princeton’s impressively Gothic campus.


A typical Princeton reading room

The IAS was designed as a place for researchers to work with minimal distraction, and we’re taking full advantage of it. (Though I wouldn’t mind a few more basic distractions… dinner closer than thirty minutes away, for example.)

The amplitudes community seems to be busily working as well, with several interesting papers going up on the arXiv this week, four with some connection to the IAS.

Carlos Mafra and Oliver Schlotterer’s paper about one-loop string amplitudes mentions visiting the IAS in the acknowledgements. Mafra and Schlotterer have found a “double-copy” structure in the one-loop open string. Loosely, “double-copy” refers to situations in which one theory can be described as two theories “multiplied together”, like how “gravity is Yang-Mills squared”. Normally, open strings would be the “Yang-Mills” in that equation, with their “squares”, closed strings, giving gravity. Here though, open strings themselves are described as a “product” of two different pieces, a Yang-Mills part and one that takes care of the “stringiness”. You may remember me talking about something like this and calling it “Z theory”. That was at “tree level”, for the simplest string diagrams. This paper updates the technology to one-loop, where the part taking care of the “stringiness” has a more sophisticated mathematical structure. It’s pretty nontrivial for this kind of structure to survive at one loop, and it suggests something deeper is going on.

Yvonne Geyer (IAS) and Ricardo Monteiro (non-IAS) work on the ambitwistor string, a string theory-like setup for calculating particle physics amplitudes. Their paper shows how this setup can be used for one-loop amplitudes in a wide range of theories, in particular theories without supersymmetry. This makes some patterns that were observed before quite a bit clearer, and leads to a fairly concise way of writing the amplitudes.

Nima-watchers will be excited about a paper by Nima Arkani-Hamed and his student Yuntao Bai (IAS) and Song He and his student Gongwang Yan (non-IAS). This paper is one that has been promised for quite some time; Nima talked about it at Amplitudes last summer. Nima is famous for the amplituhedron, an abstract geometrical object that encodes amplitudes in one specific theory, N=4 super Yang-Mills. Song He is known for the Cachazo-He-Yuan (or CHY) string, a string-theory-like picture of particle scattering in a very general class of theories that is closely related to the ambitwistor string. Collaborating, they’ve managed to link the two pictures together, and in doing so take the first step to generalizing the amplituhedron to other theories. In order to do this they had to think about the amplituhedron not in terms of some abstract space, but in terms of the actual momenta of the particles they’re colliding. This is important because the amplituhedron’s abstract space is very specific to N=4 super Yang-Mills, with supersymmetry in some sense built in, while momenta can be written down for any particles. Once they had mastered this trick, they could encode other things in this space of momenta: colors of quarks, for example. Using this, they’ve managed to find amplituhedron-like structure in the CHY string, and in a few particular theories. They still can’t do everything the amplituhedron can: in particular, the amplituhedron can go to any number of loops, while the structures they’re finding are tree-level. But the core trick they’re using looks very powerful. I’ve been hearing hints about the trick from Nima for so long that I had forgotten they hadn’t published it yet; now that they have, I’m excited to see what the amplitudes community manages to do with it.

Finally, last night a paper by Igor Prlina, Marcus Spradlin, James Stankowicz, Stefan Stanojevic, and Anastasia Volovich went up while three of the authors were visiting the IAS. The paper deals with Landau equations, a method to classify and predict the singularities of amplitudes. By combining this method with the amplituhedron they’ve already made substantial progress, and this paper serves as a fairly thorough proof of principle, using the method to comprehensively catalog the singularities of one-loop amplitudes. In this case I’ve been assured that they have papers at higher loops in the works, so it will be interesting to see how powerful this method ends up being.
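For readers who haven’t met them, the Landau equations in their textbook form (not this paper’s specific implementation) say that a Feynman integral with propagators \frac{1}{q_i^2 - m_i^2} can only develop a singularity when, for some Feynman parameters \alpha_i not all zero,

\alpha_i (q_i^2 - m_i^2) = 0 \text{ for each internal line } i, \qquad \sum_{i \in {\rm loop}} \alpha_i\, q_i = 0 \text{ for each loop},

that is, when each internal line is either on-shell or effectively dropped, and the on-shell momenta are constrained around every loop.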