A Micrographia of Beastly Feynman Diagrams

Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?

I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.

We just needed to look at some more exotic beasts to figure it out.

[Image: electron-microscope photo of a tardigrade, credit Eye of Science]

Like this guy!

Meet the tardigrade. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.

[Image: the even-loop "tardigrade" Feynman diagrams]

A clear resemblance!

There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.

The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.

This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.
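
To give a flavor of the counting, here's a sketch of the simplest case, the banana graphs, in the standard Feynman-parametric language. An L-loop integral with n propagators in D dimensions takes the form

I \propto \Gamma\!\left(n-\tfrac{LD}{2}\right)\int_{\alpha_i\ge 0}\delta\Big(1-\sum_i\alpha_i\Big)\prod_{i=1}^n d\alpha_i\,\frac{U^{\,n-(L+1)D/2}}{F^{\,n-LD/2}}

where U and F are the Symanzik polynomials of the graph, homogeneous of degree L and L+1 in the \alpha_i. For the L-loop banana in D=2, the n=L+1 propagators make the exponent of U vanish and the exponent of F equal to one, so the integrand is just 1/F. The surface F=0 is then a hypersurface of degree L+1 in the projective space \mathbb{P}^L of Feynman parameters, and "degree equals the number of variables" is precisely the classic condition for a projective hypersurface to be Calabi-Yau.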

We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks was our inspiration for the “bestiary” in our title, an author whose whimsy we heartily approve of.

We hope that, like a classical bestiary, ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.


The Amplitudes Assembly Line

In the amplitudes field, we calculate probabilities for particles to interact.

We’re trying to improve on the old-school way of doing this, a kind of standard assembly line. First, you define your theory, writing down something called a Lagrangian. Then you start drawing Feynman diagrams, starting with the simplest “tree” diagrams and moving on to more complicated “loops”. Using rules derived from your Lagrangian, you translate these Feynman diagrams into a set of integrals. Do the integrals, and finally you have your answer.

Our field is a big tent, with many different approaches. Despite that, a kind of standard picture has emerged. It’s not the best we can do, and it’s certainly not what everyone is doing. But it’s in the back of our minds, a default to compare against and improve on. It’s the amplitudes assembly line: an “industrial” process that takes raw assumptions and builds particle physics probabilities.

[Image: the amplitudes assembly line]

  1. Start with some simple assumptions about your particles (what mass do they have? what is their spin?) and your theory (minimally, it should obey special relativity). Using that, find the simplest “trees”, involving only three particles: one particle splitting into two, or two particles merging into one.
  2. With the three-particle trees, you can now build up trees with any number of particles, using a technique called BCFW (named after its inventors, Ruth Britto, Freddy Cachazo, Bo Feng, and Edward Witten); a sketch follows after this list.
  3. Now that you’ve got trees with any number of particles, it’s time to get loops! As it turns out, you can stitch together your trees into loops, using a technique called generalized unitarity. To do this, you have to know what kinds of integrals are allowed to show up in your result, and a fair amount of effort in the field goes into figuring out a better “basis” of integrals.
  4. (Optional) Generalized unitarity will tell you which integrals you need to do, but those integrals may be related to each other. By understanding where these relations come from (integration-by-parts identities, chiefly), you can reduce to a smaller basis of "master" integrals. You can also try to aim for integrals with particular special properties; quite a lot of effort goes into improving this basis as well. The end goal is to make the final step as easy as possible:
  5. Do the integrals! If you just want to get a number out, you can use numerical methods. Otherwise, there’s a wide variety of choices available. Methods that use differential equations are probably the most popular right now, but I’m a fan of other options.
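
To give a flavor of step 2, here's a sketch of the standard BCFW setup (one choice among several). Pick two external momenta and shift them by a complex parameter z, in a way that keeps both on-shell and preserves momentum conservation:

\hat{p}_1(z)=p_1+zq,\qquad \hat{p}_n(z)=p_n-zq,\qquad q^2=q\cdot p_1=q\cdot p_n=0

The amplitude becomes a rational function of z, and (when it vanishes as z\to\infty) Cauchy's theorem reconstructs it from its poles, each of which factorizes into two smaller on-shell amplitudes:

A_n = \sum_{I} \hat{A}_{L}(z_I)\,\frac{1}{P_I^2}\,\hat{A}_{R}(z_I)

where the sum runs over ways of splitting the particles into two groups and z_I is the value of the shift that puts the intermediate momentum on-shell. Each term is built from amplitudes with fewer particles, so starting from the three-particle trees you can climb all the way up.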

Some people work to improve one step in this process, making it as efficient as possible. Others skip one step, or all of them, replacing them with deeper ideas. Either way, the amplitudes assembly line is the background: our current industrial machine, churning out predictions.

Congratulations to Arthur Ashkin, Gérard Mourou, and Donna Strickland!

The 2018 Physics Nobel Prize was announced this week, awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their work in laser physics.

[Image: the 2018 Nobel announcement]

Some Nobel prizes recognize discoveries of the fundamental nature of reality. Others recognize the tools that make those discoveries possible.

Ashkin developed techniques that use lasers to hold small objects in place, culminating in “optical tweezers” that can pick up and move individual bacteria. Mourou and Strickland developed chirped pulse amplification, the current state of the art in extremely high-power lasers. Strickland is only the third woman to win the Nobel prize in physics; Ashkin, at 96, is the oldest person ever to win the prize.

(As an aside, the phrase “optical tweezers” probably has you imagining two beams of laser light pinching a bacterium between them, like microscopic lightsabers. In fact, optical tweezers use a single beam, focused and bent so that if an object falls out of place it will gently roll back to the middle of the beam. Instead of tweezers, it’s really more like a tiny laser spoon.)
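
(If you want a slightly more quantitative picture: in the simplest, “Rayleigh” regime, a small bead with polarizability \alpha sitting in the beam feels a gradient force

F_{\text{grad}}=\frac{\alpha}{2}\nabla\langle E^2\rangle \propto \nabla I

pointing toward higher intensity. As long as the bead’s refractive index is higher than the surrounding medium’s, it gets pulled toward the focus, which is exactly the “rolling back to the middle” described above.)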

The Nobel announcement emphasizes practical applications, like eye surgery. It’s important to remember that these are research tools as well. I wouldn’t have recognized the names of Ashkin, Mourou, and Strickland, but I recognized atom trapping, optical tweezers, and ultrashort pulses. Hang around atomic physicists, or quantum computing experiments, and these words pop up again and again. These are essential tools that have given rise to whole subfields. LIGO won a Nobel based on the expectation that it would kick-start a vast new area of research. Ashkin, Mourou, and Strickland’s work already has.

When You Shouldn’t Listen to a Distinguished but Elderly Scientist

Of science fiction author Arthur C. Clarke’s sayings, the most famous is “Clarke’s third law”, that “Any sufficiently advanced technology is indistinguishable from magic.” Almost as famous, though, is his first law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Recently Michael Atiyah, an extremely distinguished but also rather elderly mathematician, claimed that something was possible: specifically, he claimed it was possible that he had proved the Riemann hypothesis, one of the longest-standing and most difficult puzzles in mathematics. I won’t go into the details here, but people are, well, skeptical.

This post isn’t really about Atiyah. I’m not close enough to that situation to comment. Instead, it’s about a more general problem.

See, the public seems to mostly agree with Clarke’s law. They trust distinguished, elderly scientists, at least when they’re saying something optimistic. Other scientists know better. We know that scientists are human, that humans age…and that sometimes scientific minds don’t age gracefully.

Some of the time, that means Alzheimer’s, or another form of dementia. Other times, it’s nothing so extreme, just a mind slowing down with age, opinions calcifying and logic getting just a bit more fuzzy.

And the thing is, watching from the sidelines, you aren’t going to know the details. Other scientists in the field will, but this kind of thing is almost never discussed with the wider public. Even here, though specific physicists come to mind as I write this, I’m not going to name them. It feels rude, to point out that kind of all-too-human weakness in someone who accomplished so much. But I think it’s important for the public to keep in mind that these people exist. When an elderly Nobelist claims to have solved a problem that baffles mainstream science, the news won’t tell you they’re mentally ill. All you can do is keep your eyes open, and watch for warning signs:

Be wary of scientists who isolate themselves. Scientists who still actively collaborate and mentor almost never have this kind of problem. There’s a nasty feedback loop when those contacts start to diminish. Being regularly challenged is crucial to test scientific ideas, but it’s also important for mental health, especially in the elderly. As a scientist thinks less clearly, they won’t be able to keep up with their collaborators as much, worsening the situation.

Similarly, beware those famous enough to surround themselves with yes-men. With Nobel prizewinners in particular, many of the worst cases involve someone treated with so much reverence that they forget to question their own ideas. This is especially risky when commenting on an unfamiliar field: often, the Nobelist’s contacts in the new field have a vested interest in holding on to their big-name support, and ignoring signs of mental illness.

Finally, as always, bigger claims require better evidence. If everything someone works on is supposed to revolutionize science as we know it, then likely none of it will. The signs that indicate crackpots apply here as well: heavily invoking historical scientists, emphasis on notation over content, a lack of engagement with the existing literature. Be especially wary if the argument seems easy; deep problems are rarely so simple to solve.

Keep this in mind, and the next time a distinguished but elderly scientist states that something is possible, don’t trust them blindly. Ultimately, we’re still human beings. We don’t last forever.

Don’t Marry Your Arbitrary

This fall, I’m TAing a course on General Relativity. I haven’t taught in a while, so it’s been a good opportunity to reconnect with how students think.

This week, one problem left several students confused. The problem involved Christoffel symbols, the bane of many a physics grad student, but the trick that they had to use was in the end quite simple. It’s an example of a broader trick, a way of thinking about problems that comes up all across physics.

To see a simplified version of the problem, imagine you start with this sum:

g(j)=\sum_{i=0}^n \left( f(i,j)-f(j,i) \right)

Now, imagine you want to sum the function g(j) over j. You can write:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n \left( f(i,j)-f(j,i) \right)

Let’s break this up into two terms, for later convenience:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{j=0}^n \sum_{i=0}^n f(j,i)

Without telling you anything about f(i,j), what do you know about this sum?

Well, one thing you know is that i and j are arbitrary.

i and j are letters you happened to use. You could have used different letters, x and y, or \alpha and \beta. You could even use different letters in each term, if you wanted to. In particular, you could pick just one term, and swap i and j:

\sum_{j=0}^n g(j) = \sum_{j=0}^n \sum_{i=0}^n f(i,j) - \sum_{i=0}^n \sum_{j=0}^n f(i,j) = 0

And now, without knowing anything about f(i,j), you know that \Sigma_{j=0}^n g(j) is zero.
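
If you’d rather convince yourself numerically, here’s a minimal sketch in Python (the particular f is an arbitrary, deliberately asymmetric choice):

  def f(i, j):
      # Any function of two indices will do; this one is deliberately asymmetric.
      return i**2 * j + 3 * i - j

  def g(j, n):
      # g(j) = sum over i of ( f(i,j) - f(j,i) )
      return sum(f(i, j) - f(j, i) for i in range(n + 1))

  n = 10
  print(sum(g(j, n) for j in range(n + 1)))  # prints 0: the double sums cancel

Swap in any f you like; the relabeling argument guarantees the answer stays zero.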

In physics, it’s extremely important to keep track of what could be really physical, and what is merely your arbitrary choice. In general relativity, your choice of coordinates (Cartesian versus polar, say) shouldn’t affect your calculation. In quantum field theory, your choice of gauge shouldn’t matter, and neither should your scheme for regularizing divergences.

Ideally, you’d do your calculation without making any of those arbitrary choices: no coordinates, no choice of gauge, no regularization scheme. In practice, sometimes you can do this, sometimes you can’t. When you can’t, you need to keep that arbitrariness in the back of your mind, and not get stuck assuming your choice was the only one. If you’re careful with arbitrariness, it can be one of the most powerful tools in physics. If you’re not, you can stare at a mess of Christoffel symbols for hours, and nobody wants that.

Underdetermination of Theory by Metaphor

Sometimes I explain science in unconventional ways. I’ll talk about quantum mechanics without ever using the word “measurement”, or write the action of the Standard Model in legos.

Whenever I do this, someone asks me why. Why use a weird, unfamiliar explanation? Why not just stick to the tried and true, metaphors that have been tested and honed in generations of popular science books?

It’s not that I have a problem with the popular explanations, most of the time. It’s that, even when the popular explanation does a fine job, there can be good reason to invent a new metaphor. To demonstrate my point, here’s a new metaphor to explain why:

In science, we sometimes talk about underdetermination of a theory by the data. We want to find a theory whose math matches the experimental results, but sometimes the experiments just don’t tell us enough. If multiple theories match the data, we say that the theory is underdetermined, and we go looking for more data to resolve the problem.

What if you’re not a scientist, though? Often, that means you hear about theories secondhand, from some science popularizer. You’re not hearing the full math of the theory, you’re not seeing the data. You’re hearing metaphors and putting together your own picture of the theory. Metaphors are your data, in some sense. And just as scientists can find their theories underdetermined by the experimental data, you can find them underdetermined by the metaphors.

This can happen if a metaphor is consistent with two very different interpretations. If you hear that time runs faster in lower gravity, maybe you picture space and time as curved…or maybe you think low gravity makes you skip ahead, so you end up in the “wrong timeline”. Even if the popularizer you heard it from was perfectly careful, you base your understanding of the theory on the metaphor, and you can end up with the wrong understanding.
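
(For the record, the statement behind that metaphor, in the weak-field limit: a clock sitting at gravitational potential \Phi ticks at a rate

d\tau \approx \left(1+\frac{\Phi}{c^2}\right)dt

compared to a distant observer. \Phi is negative near a mass, so clocks deeper in a gravitational well genuinely run slower; nobody skips into a different timeline, the elapsed times just differ.)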

In science, the only way out of underdetermination of a theory is new, independent data. In science popularization, it’s new, independent metaphors. New metaphors shake you out of your comfort zone. If you misunderstood the old metaphor, now you’ll try to fit that misunderstanding with the new metaphor too. Often, that won’t work: different metaphors lead to different misunderstandings. With enough different metaphors, your picture of the theory won’t be underdetermined anymore: there will be only one picture, one understanding, that’s consistent with every metaphor.

That’s why I experiment with metaphors, why I try new, weird explanations. I want to wake you up, to make sure you aren’t sticking to the wrong understanding. I want to give you more data to determine your theory.

Elliptic Integrals in Ascona

I’m at a conference this week, Elliptic Integrals in Mathematics and Physics, in Ascona, Switzerland.

[Photo: the view from the conference in Ascona]

Perhaps the only place where the view rivals Les Houches

Elliptic integrals are the next frontier after polylogarithms: more complicated functions that start coming out of Feynman diagrams at two loops. The community of physicists studying them is still quite small, and a large fraction of them are here at this conference. We’re at the historic Monte Verità conference center, and we’re not even a big enough group to use their full auditorium.
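
(For readers who haven’t met them, the prototypical example is the complete elliptic integral of the first kind,

K(k)=\int_0^1\frac{dt}{\sqrt{(1-t^2)(1-k^2t^2)}}

which, unlike the logarithms and polylogarithms that cover simpler diagrams, can’t be reduced to elementary functions. The classic place it shows up is the two-loop sunrise diagram with massive particles.)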

There has been an impressive amount of progress in understanding these integrals, even just in the last year. Watching the talks, it’s undeniable that our current understanding is powerful, broad…and incomplete. In many ways the mysteries of the field are clearing up beautifully, with many once confusingly disparate perspectives linked together. On the other hand, it feels like we’re still working with the wrong picture, and I suspect there’s still a major paradigm shift in the future. All in all, the perfect time to be working on elliptics!