Tag Archives: amplitudes

The Way to a Mathematician’s Heart Is through a Pi

Want to win over a mathematician? Bake them a pi.

Of course, presentation counts. You can’t just pour a spew of digits.

[Image: pi_tau_digit_runs.svg]

If you have to, at least season it with 9’s

Ideally, you’ve baked your pi at home, in a comfortable physical theory. You lay out a graph to give it structure, then wrap it in algebraic curves before baking under an integration.

(Sometimes you can skip this part. My mathematician will happily eat graphs and ignore the pi.)

At this point, if your motives are pure (or at least mixed Tate), you have your pi. To make it more interesting, be sure to pair it with a well-aged Riemann zeta value. With the right preparation, you can achieve a truly cosmic pi.

[Image: whirled-pies]

Fine, that last joke was a bit of a stretch. Hope you had a fun pi day!

Thoughts from the Winter School

There are two things I’d like to talk about this week.

First, as promised, I’ll talk about what I worked on at the PSI Winter School.

Freddy Cachazo and I study what are called scattering amplitudes. At first glance, these are probabilities that two subatomic particles scatter off each other, relevant for experiments like the Large Hadron Collider. In practice, though, they can calculate much more.

For example, let’s say you have two black holes circling each other, like the ones LIGO detected. Zoom out far enough, and you can think of each one as a particle. The two particle-black holes exchange gravitons, and those exchanges give rise to the force of gravity between them.

[Image: bhmerger_ligo]

In the end, it’s all just particle physics.


Based on that, we can use our favorite scattering amplitudes to make predictions for gravitational wave telescopes like LIGO.

There’s a bit of weirdness to this story, though, because these amplitudes don’t line up with predictions in quite the way we’re used to. The way we calculate amplitudes involves drawing diagrams, and those diagrams have loops. Normally, each “loop” makes the amplitude more quantum-mechanical. Only the diagrams with no loops (“tree diagrams”) come from classical physics alone.

(Here “classical physics” just means “not quantum”: I’m calling general relativity “classical”.)

For this problem, we only care about classical physics: LIGO isn’t sensitive enough to see quantum effects. The weird thing is, despite that, we still need loops.

(Why? This is a story I haven’t figured out how to tell in a non-technical way. The technical explanation has to do with the fact that we’re calculating a potential, not an amplitude, so there’s a Fourier transformation, and keeping track of the dimensions entails tossing around some factors of Planck’s constant. But I feel like this still isn’t quite the full story.)
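To get a rough feel for that Fourier-transform step, here is a toy calculation (not the actual amplitudes computation): the simplest tree-level exchange amplitude scales like 1/q² in the momentum transfer q, and Fourier transforming that to position space gives the familiar 1/r potential.

```python
import sympy as sp

# Toy illustration: the tree-level exchange amplitude scales like 1/q^2
# in the momentum transfer q. Its 3D Fourier transform,
#   V(r) = Int d^3q/(2*pi)^3 * exp(i q.r) / q^2,
# reduces, after doing the angular integrals, to a radial integral:
q, r = sp.symbols('q r', positive=True)

radial = sp.integrate(sp.sin(q * r) / q, (q, 0, sp.oo))  # Dirichlet integral: pi/2
V = radial / (2 * sp.pi**2 * r)

print(sp.simplify(V))  # 1/(4*pi*r): the familiar 1/r potential
```

The real bookkeeping, with loops and factors of Planck's constant, is of course much more involved; this is just the simplest version of "amplitude in, potential out".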

So if we want to make predictions for LIGO, we want to compute amplitudes with loops. And as amplitudeologists, we should be pretty good at that.

As it turns out, plenty of other people have already had that idea, but there’s still room for improvement.

Our time with the students at the Winter School was limited, so our goal was fairly modest. We wanted to understand those other people's calculations, and perhaps to think about them in a slightly cleaner way. In particular, we wanted to understand why "loops" are really necessary, and whether there was some way of understanding what the "loops" were doing in a more purely classical picture.

At this point, we feel like we’ve got the beginning of an idea of what’s going on. Time will tell whether it works out, and I’ll update you guys when we have a more presentable picture.


Unfortunately, physics wasn’t the only thing I was thinking about last week, which brings me to my other topic.

This blog has a fairly strong policy against talking politics. This is for several reasons. Partly, it’s because politics simply isn’t my area of expertise. Partly, it’s because talking politics tends to lead to long arguments in which nobody manages to learn anything. Despite this, I’m about to talk politics.

Last week, citizens of Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen were barred from entering the US. This included not only new visa applicants, but also those who already have visas or green cards. The latter group includes long-term residents of the US, many of whom were detained in airports and threatened with deportation when their flights arrived shortly after the ban was announced. Among those was the president of the Graduate Student Organization at my former grad school.

A federal judge has blocked parts of the order, and the Department of Homeland Security has announced that there will be case-by-case exceptions. Still, plenty of people are stuck: either abroad if they didn’t get in in time, or in the US, afraid that if they leave they won’t be able to return.

Politics isn’t in my area of expertise. But…

I travel for work pretty often. I know how terrifying and arbitrary border enforcement can be. I know how it feels to risk thousands of dollars and months of planning because some consulate or border official is having a bad day.

I also know how essential travel is to doing science. When there’s only one expert in the world who does the sort of work you need, you can’t just find a local substitute.

And so for this, I don’t need to be an expert in politics. I don’t need a detailed case about the risks of terrorism. I already know what I need to, and I know that this is cruel.

And so I stand in solidarity with the people who were trapped in airports, and with those still trapped, whether abroad or in the US. You have been treated cruelly, and you shouldn't have been. Hopefully, that sort of message can transcend politics.


One final thing: I’m going to be a massive hypocrite and continue to ban political comments on this blog. If you want to talk to me about any of this (and you think one or both of us might actually learn something from the exchange) please contact me in private.

Hexagon Functions Meet the Amplituhedron: Thinking Positive

I finished a new paper recently; it's up on arXiv now.

This time, we’re collaborating with Jaroslav Trnka, of Amplituhedron fame, to investigate connections between the Amplituhedron and our hexagon function approach.

The Amplituhedron is a way to think about scattering amplitudes in our favorite toy model theory, N=4 super Yang-Mills. Specifically, it describes amplitudes as the “volume” of some geometric space.

Here’s something you might expect: if something is a volume, it should be positive, right? You can’t have a negative amount of space. So you’d naturally guess that these scattering amplitudes, if they’re really the “volume” of something, should be positive.

"Volume" is in quotation marks there for a reason, though, because the real story is a bit more complicated. The Amplituhedron isn't literally the volume of some space: there are a bunch of other mathematical steps between the geometric story of the Amplituhedron on one end and the final amplitude on the other. If it were literally a volume, calculating it would be quite a bit easier: mathematicians have gotten very talented at calculating volumes. But if it were literally a volume, it would also have to be positive.

What our paper demonstrates is that, in the right regions (selected by the structure of the Amplituhedron), the amplitudes we’ve calculated so far are in fact positive. That first, basic requirement for the amplitude to actually literally be a volume is satisfied.
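The check itself is conceptually simple, even if the functions involved are not. A toy stand-in (with a made-up function in place of the real amplitude, and a made-up region in place of the one the Amplituhedron selects): sample points inside the region and verify the sign.

```python
import random

# Toy version of a positivity check. The real calculation evaluates
# hexagon-function amplitudes in Amplituhedron-selected regions;
# here f is just a hypothetical stand-in.
def f(u, v, w):
    """Hypothetical stand-in for an amplitude."""
    return u * v + v * w + w * u  # positive whenever u, v, w > 0

def all_positive(func, n_samples=10_000, seed=0):
    """Sample points in the toy region 0 < u, v, w < 1 and check the sign."""
    rng = random.Random(seed)
    return all(
        func(rng.random(), rng.random(), rng.random()) > 0
        for _ in range(n_samples)
    )

print(all_positive(f))  # True
```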

Of course, this doesn’t prove anything. There’s still a lot of work to do to actually find the thing the amplitude is the volume of, and this isn’t even proof that such a thing exists. It’s another, small piece of evidence. But it’s a reassuring one, and it’s nice to begin to link our approach with the Amplituhedron folks.

This week was the 75th birthday of John Schwarz, one of the founders of string theory and a discoverer of N=4 super Yang-Mills. We’ve dedicated the paper to him. His influence on the field, like the amplitudes of N=4 themselves, has been consistently positive.

What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it's not clear that either will be funded. Even if the case for discovering new physics at such a collider isn't as strong, there are properties of the Higgs that the LHC won't be able to measure, things it's important to check with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By "the field", here, I don't just mean those focused on making predictions for collider physics. My work isn't plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you'd think it wouldn't matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn't a next big collider, our field won't dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working on a zombie field, shambling around in a subject that is already dead but can't admit it.

[Image: z-nation-field-of-zombies]

Well I had to use some Halloween imagery

My hope is that this won't happen. Even if the new colliders don't get approved and collider physics goes dormant, I'd like to think my colleagues are adaptable enough to stay useful as the world's demands change. But I'm young in this field; I haven't seen it face these kinds of challenges before. And so, I worry.

Four Gravitons in China

I’m in China this week, at the School and Workshop on Amplitudes in Beijing 2016.

[Image: img_20161018_085714]

It’s a little chilly this time of year, so the dragons have accessorized

A few years back, I mentioned that there didn’t seem to be many amplitudeologists in Asia. That’s changed quite a lot over just the last few years. Song He and Yu-tin Huang went from postdocs in the west to faculty positions in China and Taiwan, respectively, while Bo Feng’s group in China has expanded. As a consequence, there’s now a substantial community here. This is the third “Amplitudes in Asia” conference, with past years meeting in Hong Kong and Taipei.

The “school” part of the conference was last week. I wasn’t here, but the students here seem to have enjoyed it a lot. This week is the “workshop” part, and there have been talks on a variety of parts of amplitudes. Nima showed up on Wednesday and managed to talk for his usual impressively long amount of time, finishing with a public lecture about the future of physics. The talk was ostensibly about why China should build the next big collider, but for the most part it ended up as a more general talk about exciting open questions in high energy physics. The talks were recorded, so they should be online at some point.

Hexagon Functions IV: Steinmann Harder

It’s paper season! I’ve got another paper out this week, this one a continuation of the hexagon function story.

The story so far:

My collaborators and I have been calculating "six-particle" (two particles collide, four come out, or three collide, three come out…) scattering amplitudes (probabilities that particles scatter) in N=4 super Yang-Mills. We calculate them starting with an ansatz (a guess, basically) made up of a class of functions called hexagon functions: "hexagon" because they're the right functions for six-particle scattering. We then narrow down our guess by bringing in other information: for example, if two particles are close to lining up, our answer needs to match the one calculated with something called the POPE, so we can throw out guesses that don't match it. In the end, only one guess survives, and we can check that it's the right answer.
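At its core, this ansatz-and-constraints strategy is linear algebra: write the answer as an unknown linear combination of basis functions, then impose conditions until only one combination survives. A toy version, with polynomials standing in for hexagon functions and made-up conditions standing in for the real constraints:

```python
import sympy as sp

x = sp.symbols('x')
# Toy ansatz: an unknown linear combination of basis "functions"
# (simple polynomials here; hexagon functions in the real calculation).
c0, c1, c2 = sp.symbols('c0 c1 c2')
ansatz = c0 + c1 * x + c2 * x**2

# Impose made-up constraints, standing in for things like matching
# the POPE in special kinematic limits:
constraints = [
    sp.Eq(ansatz.subs(x, 0), 1),               # value at a special point
    sp.Eq(sp.diff(ansatz, x).subs(x, 0), 0),   # vanishing derivative there
    sp.Eq(ansatz.subs(x, 1), 3),               # matching another limit
]
solution = sp.solve(constraints, [c0, c1, c2])
print(ansatz.subs(solution))  # 2*x**2 + 1: only one guess survives
```

The real ansatz has vastly more unknowns, but the narrowing-down works the same way.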

So what’s new this time?

More loops:

In quantum field theory, most of our calculations are approximate, and we measure the precision in something called loops. The more loops, the closer we are to the exact result, and the more complicated the calculation becomes.
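To get a feel for "more loops means closer to the exact result", here is a toy perturbative expansion (nothing to do with the actual amplitudes, just an illustration): approximate a known function by its power series in a small coupling g, and watch the error shrink order by order.

```python
# Toy illustration of a loop expansion: approximate the "exact" answer
# 1/(1 - g) by its power series 1 + g + g^2 + ... in a small coupling g.
# Each extra order (each "loop") shrinks the error.

g = 0.1
exact = 1 / (1 - g)

partial = 0.0
for order in range(5):
    partial += g**order
    error = abs(exact - partial)
    print(f"{order} 'loops': approx = {partial:.6f}, error = {error:.2e}")
```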

This time, we’re at five loops of precision. To give you an idea of how complicated that is: I store these functions in text files. We’ve got a new, more efficient notation for them. With that, the two-loop functions fit into files around 20KB. Three loops, 500KB. Four, 15MB. And five? 300MB.

So if you want to imagine five loops, think about something that needs to be stored in a 300MB text file.

More insight:

We started out having noticed some weird new symmetries in our old results, so we brought in Simon Caron-Huot, expert on weird new symmetries. He couldn't figure out that one…but he did notice an entirely different symmetry, one that turned out to have first been noticed in the 1960s, called the Steinmann relations.

The core idea of the Steinmann relations goes back to the old method of calculating amplitudes, with Feynman diagrams. In Feynman diagrams, lines represent particles traveling from one part of the diagram to the other. In a simplified form, the Steinmann conditions are telling us that diagrams can’t take two mutually exclusive shapes at the same time. If three particles are going one way, they can’t also be going another way.

[Image: steinmann2]
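In our setup, a condition like this becomes a simple filter on the ansatz. Very schematically (with made-up channel labels, not the real hexagon-function alphabet): each term carries a sequence of "channels", and terms whose sequences put two mutually exclusive channels next to each other get thrown out before we even start constraining.

```python
from itertools import product

# Schematic Steinmann-style filter (made-up labels, not the real
# hexagon-function alphabet). Each ansatz term is labeled by a word
# of "channels"; certain pairs of overlapping channels may not
# appear next to each other.
channels = ["s123", "s234", "s345"]
forbidden_pairs = {("s123", "s234"), ("s234", "s345"), ("s345", "s123")}
forbidden_pairs |= {(b, a) for a, b in forbidden_pairs}  # make it symmetric

def allowed(word):
    """A word survives if no adjacent pair of channels is mutually exclusive."""
    return all((a, b) not in forbidden_pairs for a, b in zip(word, word[1:]))

words = list(product(channels, repeat=3))
survivors = [w for w in words if allowed(w)]
print(len(words), "->", len(survivors))  # 27 -> 3: the ansatz shrinks
```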

With the Steinmann relations, things suddenly became a whole lot easier. Calculations that had taken us months to do, Simon was now doing in a week. Finally we could narrow things down and get the full answer, and we could do it with clear, physics-based rules.

More bootstrap:

In physics, when we call something a "bootstrap" it's in reference to the phrase "pull yourself up by your own bootstraps". That impossible task, lifting yourself with no outside support, is essentially what we do when we "bootstrap": we do a calculation with no external input, simply by applying general rules.

In the past, our hexagon function calculations always had some sort of external data. For the first time, with the Steinmann conditions, we don’t need that. Every constraint, everything we do to narrow down our guess, is either a general rule or comes out of our lower-loop results. We never need detailed information from anywhere else.

This is big, because it might allow us to avoid loops altogether. Normally, each loop is an approximation, narrowed down using similar approximations from others. If we don’t need the approximations from others, though, then we might not need any approximations at all. For this particular theory, for this toy model, we might be able to actually calculate scattering amplitudes exactly, for any strength of forces and any energy. Nobody’s been able to do that for this kind of theory before.

We’re already making progress. We’ve got some test cases, simpler quantities that we can understand with no approximations. We’re starting to understand the tools we need, the pieces of our bootstrap. We’ve got a real chance, now, of doing something really fundamentally new.

So keep watching this blog, keep your eyes on arXiv: big things are coming.

A Papal Resummation

I’ve got a new paper up this week. This one is a collaboration with Ho Tat Lam, who just finished a Master’s degree at Perimeter and will be at Princeton in the fall.

A while back, I mentioned that Perimeter’s Master’s program was holding a Winter School up in the wilderness of Ontario. In between skiing and ice skating, I worked with a group of students attempting to sum up something called the Pentagon Operator Product Expansion, or POPE.

[Image: SpacePope]

The (Rapidity) Space Pope, for a joke only three people will get

While we didn’t finish the job there, we made a lot of progress, and Ho Tat and I kept working on it.

This is the first time I’ve been the senior member of a collaboration, and it was an interesting experience. There’s a lot that you feel like you know perfectly well until you sit down and try to teach it. Getting things out of my head and into someone else’s is a challenge, but it’s one I’m getting better at.

The POPE is an alternate way of calculating scattering amplitudes in N=4 super Yang-Mills. Rather than going loop by loop (and approximating the forces involved as small), it’s a sum of terms that approximate the energy as small. If all of those terms could be added up, we could calculate amplitudes in this theory for any energy and any strength of force.

We can’t do that in general (yet). What we can do is bring back the loop by loop approximation, but keep the sum in energy. If we add up that sum, we can check it against the known loop by loop results, and see if our calculation is faster. Along the way, we learn a bit about how these sums add up to give us polylogarithms.
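A tiny example of the kind of "sum becomes a polylogarithm" step (a standard identity, far simpler than the sums in the paper): the series Σ xⁿ/n² adds up to the dilogarithm Li₂(x), which at x = 1/2 has a known closed form.

```python
import math

# A standard identity, much simpler than the POPE sums: the series
#   sum_{n>=1} x^n / n^2  =  Li_2(x)  (the dilogarithm),
# which at x = 1/2 has the closed form pi^2/12 - (ln 2)^2 / 2.
x = 0.5
series = sum(x**n / n**2 for n in range(1, 200))
closed_form = math.pi**2 / 12 - math.log(2)**2 / 2

print(series, closed_form)  # the two agree to high precision
```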

Ho Tat and I have done the first loop. Going further isn't just a bigger calculation: there are new challenges we'll have to face. But I think we've got a shot at it.