Category Archives: Amplitudes Methods

Thoughts from the Winter School

There are two things I’d like to talk about this week.

First, as promised, I’ll talk about what I worked on at the PSI Winter School.

Freddy Cachazo and I study what are called scattering amplitudes. At first glance, these are the probabilities that two subatomic particles scatter off each other, relevant for experiments like the Large Hadron Collider. In practice, though, they can be used to calculate much more.

For example, let’s say you have two black holes circling each other, like the ones LIGO detected. Zoom out far enough, and you can think of each one as a particle. The two particle-black holes exchange gravitons, and those exchanges give rise to the force of gravity between them.


In the end, it’s all just particle physics.

 

Based on that, we can use our favorite scattering amplitudes to make predictions for gravitational wave telescopes like LIGO.

There’s a bit of weirdness to this story, though, because these amplitudes don’t line up with predictions in quite the way we’re used to. The way we calculate amplitudes involves drawing diagrams, and those diagrams have loops. Normally, each “loop” makes the amplitude more quantum-mechanical. Only the diagrams with no loops (“tree diagrams”) come from classical physics alone.

(Here “classical physics” just means “not quantum”: I’m calling general relativity “classical”.)

For this problem, we only care about classical physics: LIGO isn’t sensitive enough to see quantum effects. The weird thing is, despite that, we still need loops.

(Why? This is a story I haven’t figured out how to tell in a non-technical way. The technical explanation has to do with the fact that we’re calculating a potential, not an amplitude, so there’s a Fourier transformation, and keeping track of the dimensions entails tossing around some factors of Planck’s constant. But I feel like this still isn’t quite the full story.)
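For the technically inclined, the Fourier transformation step can be sketched schematically (in one common convention; normalizations and factors of mass and Planck's constant vary between references):

```latex
% Born-style relation: the classical potential is (up to conventional
% factors) the Fourier transform of the non-relativistic amplitude
% in the momentum transfer q
V(\vec{r}) \,\propto\, \int \frac{d^3 q}{(2\pi)^3}\;
  e^{i \vec{q}\cdot\vec{r}/\hbar}\; \mathcal{M}(\vec{q})

% For single-graviton exchange, \mathcal{M} \sim G m_1 m_2 / \vec{q}^{\,2},
% and the transform reproduces Newton's potential:
V(r) \,\sim\, -\frac{G m_1 m_2}{r}
```

The explicit \(\hbar\) in the exponent is one place those factors of Planck's constant sneak in, which is part of why the loop counting doesn't line up with "classical vs. quantum" in the usual way.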

So if we want to make predictions for LIGO, we want to compute amplitudes with loops. And as amplitudeologists, we should be pretty good at that.

As it turns out, plenty of other people have already had that idea, but there’s still room for improvement.

Our time with the students at the Winter School was limited, so our goal was fairly modest. We wanted to understand those other people’s calculations, and perhaps to think about them in a slightly cleaner way. In particular, we wanted to understand why “loops” are really necessary, and whether there was some way of understanding what the “loops” were doing in a more purely classical picture.

At this point, we feel like we’ve got the beginning of an idea of what’s going on. Time will tell whether it works out, and I’ll update you guys when we have a more presentable picture.


 

Unfortunately, physics wasn’t the only thing I was thinking about last week, which brings me to my other topic.

This blog has a fairly strong policy against talking politics. This is for several reasons. Partly, it’s because politics simply isn’t my area of expertise. Partly, it’s because talking politics tends to lead to long arguments in which nobody manages to learn anything. Despite this, I’m about to talk politics.

Last week, citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen were barred from entering the US. This included not only new visa applicants, but also those who already have visas or green cards. The latter group includes long-term residents of the US, many of whom were detained in airports and threatened with deportation when their flights arrived shortly after the ban was announced. Among those was the president of the Graduate Student Organization at my former grad school.

A federal judge has blocked parts of the order, and the Department of Homeland Security has announced that there will be case-by-case exceptions. Still, plenty of people are stuck: either abroad if they didn’t get in in time, or in the US, afraid that if they leave they won’t be able to return.

Politics isn’t in my area of expertise. But…

I travel for work pretty often. I know how terrifying and arbitrary border enforcement can be. I know how it feels to risk thousands of dollars and months of planning because some consulate or border official is having a bad day.

I also know how essential travel is to doing science. When there’s only one expert in the world who does the sort of work you need, you can’t just find a local substitute.

And so for this, I don’t need to be an expert in politics. I don’t need a detailed case about the risks of terrorism. I already know what I need to, and I know that this is cruel.

And so I stand in solidarity with the people who were trapped in airports, and with those still stuck, whether abroad or in the US. You have been treated cruelly, and you shouldn’t have been. Hopefully, that sort of message can transcend politics.

 

One final thing: I’m going to be a massive hypocrite and continue to ban political comments on this blog. If you want to talk to me about any of this (and you think one or both of us might actually learn something from the exchange) please contact me in private.

Hexagon Functions Meet the Amplituhedron: Thinking Positive

I finished a new paper recently; it’s up on arXiv now.

This time, we’re collaborating with Jaroslav Trnka, of Amplituhedron fame, to investigate connections between the Amplituhedron and our hexagon function approach.

The Amplituhedron is a way to think about scattering amplitudes in our favorite toy model theory, N=4 super Yang-Mills. Specifically, it describes amplitudes as the “volume” of some geometric space.

Here’s something you might expect: if something is a volume, it should be positive, right? You can’t have a negative amount of space. So you’d naturally guess that these scattering amplitudes, if they’re really the “volume” of something, should be positive.

“Volume” is in quotation marks there for a reason, though, because the real story is a bit more complicated. The Amplituhedron isn’t literally the volume of some space: there are a bunch of other mathematical steps between the geometric story of the Amplituhedron on the one end and the final amplitude on the other. If it were literally a volume, calculating it would be quite a bit easier: mathematicians have gotten very talented at calculating volumes. But if it were literally a volume, it would also have to be positive.

What our paper demonstrates is that, in the right regions (selected by the structure of the Amplituhedron), the amplitudes we’ve calculated so far are in fact positive. That first, basic requirement for the amplitude to actually literally be a volume is satisfied.

Of course, this doesn’t prove anything. There’s still a lot of work to do to actually find the thing the amplitude is the volume of, and this isn’t even proof that such a thing exists. It’s another, small piece of evidence. But it’s a reassuring one, and it’s nice to begin to link our approach with the Amplituhedron folks.

This week was the 75th birthday of John Schwarz, one of the founders of string theory and a discoverer of N=4 super Yang-Mills. We’ve dedicated the paper to him. His influence on the field, like the amplitudes of N=4 themselves, has been consistently positive.

Four Gravitons in China

I’m in China this week, at the School and Workshop on Amplitudes in Beijing 2016.


It’s a little chilly this time of year, so the dragons have accessorized

A few years back, I mentioned that there didn’t seem to be many amplitudeologists in Asia. That’s changed quite a lot over just the last few years. Song He and Yu-tin Huang went from postdocs in the west to faculty positions in China and Taiwan, respectively, while Bo Feng’s group in China has expanded. As a consequence, there’s now a substantial community here. This is the third “Amplitudes in Asia” conference, with past years meeting in Hong Kong and Taipei.

The “school” part of the conference was last week. I wasn’t here, but the students here seem to have enjoyed it a lot. This week is the “workshop” part, and there have been talks on a variety of parts of amplitudes. Nima showed up on Wednesday and managed to talk for his usual impressively long amount of time, finishing with a public lecture about the future of physics. The talk was ostensibly about why China should build the next big collider, but for the most part it ended up as a more general talk about exciting open questions in high energy physics. The talks were recorded, so they should be online at some point.

Hexagon Functions IV: Steinmann Harder

It’s paper season! I’ve got another paper out this week, this one a continuation of the hexagon function story.

The story so far:

My collaborators and I have been calculating “six-particle” (two particles collide, four come out, or three collide, three come out…) scattering amplitudes (probabilities that particles scatter) in N=4 super Yang-Mills. We calculate them starting with an ansatz (a guess, basically) made up of a type of functions called hexagon functions: “hexagon” because they’re the right functions for six-particle scattering. We then narrow down our guess by bringing in other information: for example, if two particles are close to lining up, our answer needs to match the one calculated with something called the POPE, so we can throw out guesses that don’t match that. In the end, only one guess survives, and we can check that it’s the right answer.
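To give a flavor of that logic (purely as illustration: the real ansatz involves thousands of hexagon functions and far subtler constraints), here's a toy version in Python, with a hypothetical three-term ansatz narrowed down by linear constraints until only one candidate survives:

```python
import numpy as np

# Toy "ansatz": f(x) = c0 + c1*x + c2*x^2 with unknown coefficients.
# Each piece of extra information becomes a linear equation A @ c = b
# on the coefficients, narrowing down the guess:
#   f(0) = 1,  f'(0) = 0,  f(1) = 2
A = np.array([
    [1.0, 0.0, 0.0],   # f(0)  = c0
    [0.0, 1.0, 0.0],   # f'(0) = c1
    [1.0, 1.0, 1.0],   # f(1)  = c0 + c1 + c2
])
b = np.array([1.0, 0.0, 2.0])

# With enough constraints, only one guess survives.
coeffs = np.linalg.solve(A, b)
print(coeffs)  # c = [1, 0, 1], i.e. f(x) = 1 + x^2
```

The actual calculation works the same way in spirit: a big linear system on the coefficients of the ansatz, with each physical constraint knocking out more of the unknowns.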

So what’s new this time?

More loops:

In quantum field theory, most of our calculations are approximate, and we measure the precision in something called loops. The more loops, the closer we are to the exact result, and the more complicated the calculation becomes.

This time, we’re at five loops of precision. To give you an idea of how complicated that is: I store these functions in text files. We’ve got a new, more efficient notation for them. With that, the two-loop functions fit into files around 20KB. Three loops, 500KB. Four, 15MB. And five? 300MB.

So if you want to imagine five loops, think about something that needs to be stored in a 300MB text file.
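For fun, here's a back-of-the-envelope extrapolation of that growth (a naive geometric guess, not a real projection of a six-loop calculation):

```python
# File sizes of the hexagon functions quoted above, in kilobytes,
# keyed by loop order. They grow roughly geometrically.
sizes_kb = {2: 20, 3: 500, 4: 15_000, 5: 300_000}

ratios = [sizes_kb[l + 1] / sizes_kb[l] for l in range(2, 5)]
print(ratios)  # each extra loop costs a factor of roughly 20-30

# Naive geometric-mean guess for a hypothetical six-loop file:
growth = (ratios[0] * ratios[1] * ratios[2]) ** (1 / 3)
six_loop_gb = sizes_kb[5] * growth / 1_000_000
print(f"~{six_loop_gb:.1f} GB")  # several gigabytes
```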

More insight:

We started out having noticed some weird new symmetries of our old results, so we brought in Simon Caron-Huot, an expert on weird new symmetries. He couldn’t figure out that one… but he did notice an entirely different symmetry, one that turned out to have been first noticed in the 1960s, called the Steinmann relations.

The core idea of the Steinmann relations goes back to the old method of calculating amplitudes, with Feynman diagrams. In Feynman diagrams, lines represent particles traveling from one part of the diagram to the other. In a simplified form, the Steinmann conditions are telling us that diagrams can’t take two mutually exclusive shapes at the same time. If three particles are going one way, they can’t also be going another way.
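Schematically, the condition says that an amplitude can't have "double discontinuities" in two partially overlapping channels: with \(s_{234}\) and \(s_{345}\) two such overlapping three-particle invariants,

```latex
\mathrm{Disc}_{s_{234}}\!\left(\mathrm{Disc}_{s_{345}}\,\mathcal{A}\right) = 0
```

which is the equation-level version of "three particles can't be going two mutually exclusive ways at once."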


With the Steinmann relations, things suddenly became a whole lot easier. Calculations that had taken us months, Simon was now doing in a week. Finally we could narrow things down and get the full answer, and we could do it with clear, physics-based rules.

More bootstrap:

In physics, when we call something a “bootstrap” it’s in reference to the phrase “pull yourself up by your own bootstraps”. That impossible task, lifting yourself with no outside support, is essentially what we do when we “bootstrap”: we do a calculation with no external input, simply by applying general rules.

In the past, our hexagon function calculations always had some sort of external data. For the first time, with the Steinmann conditions, we don’t need that. Every constraint, everything we do to narrow down our guess, is either a general rule or comes out of our lower-loop results. We never need detailed information from anywhere else.

This is big, because it might allow us to avoid loops altogether. Normally, each loop is an approximation, narrowed down using similar approximations from others. If we don’t need the approximations from others, though, then we might not need any approximations at all. For this particular theory, for this toy model, we might be able to actually calculate scattering amplitudes exactly, for any strength of forces and any energy. Nobody’s been able to do that for this kind of theory before.

We’re already making progress. We’ve got some test cases, simpler quantities that we can understand with no approximations. We’re starting to understand the tools we need, the pieces of our bootstrap. We’ve got a real chance, now, of doing something really fundamentally new.

So keep watching this blog, keep your eyes on arXiv: big things are coming.

A Papal Resummation

I’ve got a new paper up this week. This one is a collaboration with Ho Tat Lam, who just finished a Master’s degree at Perimeter and will be at Princeton in the fall.

A while back, I mentioned that Perimeter’s Master’s program was holding a Winter School up in the wilderness of Ontario. In between skiing and ice skating, I worked with a group of students attempting to sum up something called the Pentagon Operator Product Expansion, or POPE.


The (Rapidity) Space Pope, for a joke only three people will get

While we didn’t finish the job there, we made a lot of progress, and Ho Tat and I kept working on it.

This is the first time I’ve been the senior member of a collaboration, and it was an interesting experience. There’s a lot that you feel like you know perfectly well until you sit down and try to teach it. Getting things out of my head and into someone else’s is a challenge, but it’s one I’m getting better at.

The POPE is an alternate way of calculating scattering amplitudes in N=4 super Yang-Mills. Rather than going loop by loop (and approximating the forces involved as small), it’s a sum of terms that approximate the energy as small. If all of those terms could be added up, we could calculate amplitudes in this theory for any energy and any strength of force.

We can’t do that in general (yet). What we can do is bring back the loop by loop approximation, but keep the sum in energy. If we add up that sum, we can check it against the known loop by loop results, and see if our calculation is faster. Along the way, we learn a bit about how these sums add up to give us polylogarithms.
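The simplest example of a sum adding up to a polylogarithm is the classical polylogarithm itself, defined by a series in exactly this spirit:

```latex
\mathrm{Li}_s(x) \;=\; \sum_{n=1}^{\infty} \frac{x^n}{n^s},
\qquad
\mathrm{Li}_1(x) \;=\; \sum_{n=1}^{\infty} \frac{x^n}{n} \;=\; -\ln(1-x)
```

The sums that show up in the POPE are vastly more complicated, but the goal is the same: recognize the series as a known, well-understood function.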

Ho Tat and I have done the first loop. Going further isn’t just a bigger calculation, there are new challenges we’ll have to face. But I think we’ve got a shot at it.

Amplitudes 2016

I’m at Amplitudes this week, in Stockholm.


The land of twilight at 11pm

Last year, I wrote a post giving a tour of the field. If I had to write it again this year most of the categories would be the same, but the achievements listed would advance in loops and legs, more complicated theories and more insight.

The ambitwistor string now goes to two loops, while my collaborators and I have pushed the polylogarithm program to five loops (dedicated post on that soon!). A decent number of techniques can now be applied to QCD, including a differential equation-based method that was used to find a four-loop, three-particle amplitude. Others tied together different approaches, found novel structures in string theory, or linked amplitudes techniques to physics from other disciplines. The talks have been going up on YouTube pretty quickly, due to diligent work by Nordita’s tech guy, so if you’re at all interested check it out!

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or on how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and it will be just as approximate as if you did a numerical integration.
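As an illustration (a sketch of the series trick, with a hypothetical `sin_series` function, not how any real math library actually does it), here's the Taylor series for sine in Python:

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x) by its Taylor series: sum of (-1)^k x^(2k+1)/(2k+1)!"""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

x = 1.2345
print(sin_series(x), math.sin(x))  # the two agree to many decimal places
```

A finite number of terms gives an approximate number, just like a numerical integration, yet nobody would say we only know the sine function "numerically".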

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.