Why the Coupling Constants Aren’t Constant: Epistemology and Pragmatism

If you’ve heard a bit about physics, you might have heard that each of the fundamental forces (electromagnetism, the weak nuclear force, the strong nuclear force, and gravity) has a coupling constant: a number, handed down from nature itself, that determines how strong a force it is. Maybe you’ve seen them in a table, like this:

[Table: coupling constants of the fundamental forces, from HyperPhysics]

If you’ve heard a bit more about physics, though, you’ll have heard that those coupling constants aren’t actually constant! Instead, they vary with energy. Maybe you’ve seen them plotted like this:

[Plot: the coupling constants running with energy]

The usual way physicists explain this is in terms of quantum effects. We talk about “virtual particles”, and explain that any time particles and forces interact, these virtual particles can pop up, adding corrections that change with the energy of the interacting particles. The coupling constant includes all of these corrections, so it can’t be constant: it has to vary with energy.

[Diagram: an interaction vertex with virtual-particle corrections]
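To put a concrete formula behind that picture, here’s a standard textbook example (not anything specific to this post): the leading-log running of the electromagnetic coupling in QED, valid at energies well above the mass of the single charged fermion assumed to run in the loops. Here μ is whatever reference energy you measured the coupling at, and Q is the energy you care about:

$$\alpha(Q^2) \;\approx\; \frac{\alpha(\mu^2)}{1 \;-\; \dfrac{\alpha(\mu^2)}{3\pi}\,\ln\dfrac{Q^2}{\mu^2}}$$

As Q grows the denominator shrinks, so the effective electromagnetic coupling gets stronger at higher energies, which is exactly the kind of energy dependence the plot above is showing.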

Maybe you’re happy with this explanation. But maybe you object:

“Isn’t there still a constant, though? If you ignore all the virtual particles, and drop all the corrections, isn’t there some constant number you’re correcting? Some sort of ‘bare coupling constant’ you could put into a nice table for me?”

There are two reasons I can’t do that. One is an epistemological reason, which comes from what we can and cannot know. The other is practical: even if I knew the bare coupling, most of the time I wouldn’t want to use it.

Let’s start with the epistemology:

The first thing to understand is that we can’t measure the bare coupling directly. When we measure the strength of forces, we’re always measuring the result of quantum corrections. We can’t “turn off” the virtual particles.

You could imagine measuring it indirectly, though. You’d measure the end result of all the corrections, then go back and calculate. That calculation would tell you how big the corrections were supposed to be, and you could subtract them off, solve the equation, and find the bare coupling.

And this would be a totally reasonable thing to do, except that when you go and try to calculate the quantum corrections, instead of something sensible, you get infinity.

We think that “infinity” is due to our ignorance: we know some of the quantum corrections, but not all of them, because we don’t have a final theory of nature. In order to calculate anything we need to hedge around that ignorance, with a trick called renormalization. I talk about that more in an older post. The key message to take away there is that in order to calculate anything we need to give up the hope of measuring certain bare constants, even “indirectly”. Once we fix a few constants that way, the rest of the theory gives reliable predictions.
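To sketch where that infinity shows up, here’s a schematic one-loop QED example, assuming (purely for illustration) that the corrections are cut off at some enormous energy Λ:

$$\alpha_{\text{bare}} \;\approx\; \alpha(\mu) \;+\; \frac{\alpha(\mu)^2}{3\pi}\,\ln\frac{\Lambda^2}{\mu^2} \;+\;\dots$$

As the cutoff Λ is pushed to infinity the logarithm diverges, so the bare coupling can never be pinned down. Renormalization sidesteps this by fixing the measurable coupling α(μ) from experiment and writing every prediction in terms of it, with the unknowable bare constant dropping out.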

So we can’t measure bare constants, and we can’t reason our way to them. We have to find the full coupling, with all the quantum corrections, and use that as our coupling constant.

Still, you might wonder, why does the coupling constant have to vary? Can’t I just pick one measurement, at one energy, and call that the constant?

This is where pragmatism comes in. You could fix your constant at some arbitrary energy, sure. But you’ll regret it.

In particle physics, we usually calculate in something called perturbation theory. Instead of calculating something exactly, we have to use approximations. We add up the approximations, order by order, expecting that each time the corrections will get smaller and smaller, so we get closer and closer to the truth.

And this works reasonably well if your coupling constant is small enough, provided it’s at the right energy.

If your coupling constant is at the wrong energy, then your quantum corrections will notice the difference. They won’t just be small numbers anymore. Instead, they end up containing logarithms of the ratio of energies. The bigger the difference between your arbitrary energy scale and the correct one, the bigger these logarithms get.

This doesn’t make your calculation wrong, exactly. It makes your error estimate wrong. It means that your assumption that the next order is “small enough” isn’t actually true. You’d need to go to higher and higher orders to get a “good enough” answer, if you can get there at all.
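Schematically, a prediction computed with the coupling fixed at scale μ but probed at energy Q looks something like this (the coefficients c₁ and c₂ are hypothetical placeholders, not taken from any particular calculation):

$$\text{result} \;\sim\; 1 \;+\; \alpha(\mu)\Big(c_1\,\ln\frac{Q^2}{\mu^2} + \dots\Big) \;+\; \alpha(\mu)^2\Big(c_2\,\ln^2\frac{Q^2}{\mu^2} + \dots\Big) \;+\; \dots$$

Even if α(μ) itself is tiny, once Q and μ are far apart the combination α(μ) ln(Q²/μ²) stops being small, and each new order contributes as much as the last. Re-expressing the series in terms of the running coupling α(Q) absorbs those logarithms, which is exactly why the energy-dependent coupling is the one you want in practice.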

Because of that, you don’t want to think about the coupling constants as actually constant. If we knew the final theory then maybe we’d know the true numbers, the ultimate bare coupling constants. But we still would want to use coupling constants that vary with energy for practical calculations. We’d still prefer the plot, and not just the table.


Amplitudes in the LHC Era at GGI

I’m at the Galileo Galilei Institute in Florence this week, for a program on Amplitudes in the LHC Era.

[Photo: ceiling decoration at the Galileo Galilei Institute]

I didn’t notice this ceiling decoration last time I was here. These guys really love their Galileo stuff.

I’ll be here for three weeks of the full six-week program, hopefully plenty of time for some solid collaboration. This week was the “conference part”, with a flurry of talks over three days.

I missed the first day, which focused on the “actually useful” side of scattering amplitudes, practical techniques that can be applied to real Standard Model calculations. Luckily the slides are online, and at least some of the speakers are still around to answer questions. I’m particularly curious about Daniel Hulme’s talk, about an approximation strategy I hadn’t heard of before.

The topics of the next two days were more familiar, but the talks still gave me a better appreciation for the big picture behind them. From Johannes Henn’s thoughts about isolating a “conformal part” of general scattering amplitudes to Enrico Herrmann’s roadmap for finding an amplituhedron for supergravity, people seem to be aiming for bigger goals than just the next technical hurdle. It will be nice to settle in over the next couple weeks and get a feeling for what folks are working on next.

Cosmology, or Cosmic Horror?

Around Halloween, I have a tradition of posting about the “spooky” side of physics. This year, I’ll be comparing two no doubt often confused topics, Cosmic Horror and Cosmology.

[Image: Cthulhu and R’lyeh]

Pro tip: if this guy shows up, it’s probably Cosmic Horror

| Cosmic Horror | Cosmology |
| --- | --- |
| Started in the 1920’s with the work of Howard Phillips Lovecraft | Started in the 1920’s with the work of Alexander Friedmann |
| Unimaginably ancient universe | Precisely imagined ancient universe |
| In strange ages even death may die | Strange ages, what redshift is that? |
| An expedition to Antarctica uncovers ruins of a terrifying alien civilization | An expedition to Antarctica uncovers…actually, never mind, just dust |
| Alien beings may propagate in hidden dimensions | Gravitons may propagate in hidden dimensions |
| Cultists compete to be last to be eaten by the Elder Gods | Grad students compete to be last to realize there are no jobs |
| Oceanic “deep ones” breed with humans | Have you seen daycare costs in a university town? No way. |
| Variety of inventive and bizarre creatures, inspiring libraries worth of copycat works | Fritz Zwicky |
| Hollywood adaptations are increasingly popular, not very faithful to source material | Actually this is exactly the same |
| Can waste hours on an ultimately fruitless game of Arkham Horror | Can waste hours on an ultimately fruitless argument with Paul Steinhardt |
| No matter what we do, eventually Azathoth will kill us all | No matter what we do, eventually vacuum decay will kill us all |

A Micrographia of Beastly Feynman Diagrams

Earlier this year, I had a paper about the weird multi-dimensional curves you get when you try to compute trickier and trickier Feynman diagrams. These curves were “Calabi-Yau”, a type of curve string theorists have studied as a way to curl up extra dimensions to preserve something called supersymmetry. At the time, string theorists asked me why Calabi-Yau curves showed up in these Feynman diagrams. Do they also have something to do with supersymmetry?

I still don’t know the general answer. I don’t know if all Feynman diagrams have Calabi-Yau curves hidden in them, or if only some do. But for a specific class of diagrams, I now know the reason. In this week’s paper, with Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, we prove it.

We just needed to look at some more exotic beasts to figure it out.

[Image: a tardigrade]

Like this guy!

Meet the tardigrade. In biology, they’re incredibly tenacious microscopic animals, able to withstand the most extreme of temperatures and the radiation of outer space. In physics, we’re using their name for a class of Feynman diagrams.

[Figure: the “tardigrade” Feynman diagrams]

A clear resemblance!

There is a long history of physicists using whimsical animal names for Feynman diagrams, from the penguin to the seagull (no relation). We chose to stick with microscopic organisms: in addition to the tardigrades, we have paramecia and amoebas, even a rogue coccolithophore.

The diagrams we look at have one thing in common, which is key to our proof: the number of lines on the inside of the diagram (“propagators”, which represent “virtual particles”) is related to the number of “loops” in the diagram, as well as the dimension. When these three numbers are related in the right way, it becomes relatively simple to show that any curves we find when computing the Feynman diagram have to be Calabi-Yau.
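For context, here is the standard Feynman-parameter form of such an integral, the textbook starting point rather than anything special to our paper (I’m suppressing overall constants, signs, and iε conventions). For a diagram with E propagators and L loops in D dimensions:

$$\mathcal{I} \;\propto\; \int_0^\infty \prod_{i=1}^{E} dx_i\;\delta\Big(1-\sum_i x_i\Big)\;\frac{\mathcal{U}^{\,E-(L+1)D/2}}{\mathcal{F}^{\,E-LD/2}}$$

The polynomials U and F are built from the graph and have degree L and L+1 in the x’s. The exponents are where the number of propagators, the number of loops, and the dimension all meet: when they’re related so that the U factor drops out, you’re left with powers of F alone, and the surface defined by F = 0 is the geometry that can turn out to be Calabi-Yau. The two-dimensional “banana” graphs, with L+1 propagators at L loops, are the familiar case where this happens.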

This includes the most well-known case of Calabi-Yaus showing up in Feynman diagrams, in so-called “banana” or “sunrise” graphs. It’s closely related to some of the cases examined by mathematicians, and our argument ended up pretty close to one made back in 2009 by the mathematician Francis Brown for a different class of diagrams. Oddly enough, neither argument works for the “traintrack” diagrams from our last paper. The tardigrades, paramecia, and amoebas are “more beastly” than those traintracks: their Calabi-Yau curves have more dimensions. In fact, we can show they have the most dimensions possible at each loop, provided all of our particles are massless. In some sense, tardigrades are “as beastly as you can get”.

We still don’t know whether all Feynman diagrams have Calabi-Yau curves, or just these. We’re not even sure how much it matters: it could be that the Calabi-Yau property is a red herring here, noticed because it’s interesting to string theorists but not so informative for us. We don’t understand Calabi-Yaus all that well yet ourselves, so we’ve been looking around at textbooks to try to figure out what people know. One of those textbooks was our inspiration for the “bestiary” in our title, an author whose whimsy we heartily approve of.

Like the classical bestiary, we hope that ours conveys a wholesome moral. There are much stranger beasts in the world of Feynman diagrams than anyone suspected.

The Amplitudes Assembly Line

In the amplitudes field, we calculate probabilities for particles to interact.

We’re trying to improve on the old-school way of doing this, a kind of standard assembly line. First, you define your theory, writing down something called a Lagrangian. Then you start drawing Feynman diagrams, starting with the simplest “tree” diagrams and moving on to more complicated “loops”. Using rules derived from your Lagrangian, you translate these Feynman diagrams into a set of integrals. Do the integrals, and finally you have your answer.

Our field is a big tent, with many different approaches. Despite that, a kind of standard picture has emerged. It’s not the best we can do, and it’s certainly not what everyone is doing. But it’s in the back of our minds, a default to compare against and improve on. It’s the amplitudes assembly line: an “industrial” process that takes raw assumptions and builds particle physics probabilities.

[Diagram: the amplitudes assembly line]

  1. Start with some simple assumptions about your particles (what mass do they have? what is their spin?) and your theory (minimally, it should obey special relativity). Using that, find the simplest “trees”, involving only three particles: one particle splitting into two, or two particles merging into one.
  2. With the three-particle trees, you can now build up trees with any number of particles, using a technique called BCFW (named after its inventors, Ruth Britto, Freddy Cachazo, Bo Feng, and Edward Witten); its schematic form is sketched just after this list.
  3. Now that you’ve got trees with any number of particles, it’s time to get loops! As it turns out, you can stitch together your trees into loops, using a technique called generalized unitarity. To do this, you have to know what kinds of integrals are allowed to show up in your result, and a fair amount of effort in the field goes into figuring out a better “basis” of integrals.
  4. (Optional) Generalized unitarity will tell you which integrals you need to do, but those integrals may be related to each other. By understanding where these relations come from, you can reduce to a basis of fewer “master” integrals. You can also try to aim for integrals with particular special properties; quite a lot of effort goes into improving this basis as well. The end goal is to make the final step as easy as possible:
  5. Do the integrals! If you just want to get a number out, you can use numerical methods. Otherwise, there’s a wide variety of choices available. Methods that use differential equations are probably the most popular right now, but I’m a fan of other options.
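As promised, here is the schematic shape of the BCFW recursion from step 2. This is only a sketch: I’m suppressing the details of the complex momentum shift and the exact range of the sums.

$$A_n \;=\; \sum_{\text{channels}}\;\sum_{h=\pm}\; A_L\big(\dots,\hat{P}^{\,h}\big)\,\frac{1}{P^2}\,A_R\big(-\hat{P}^{\,-h},\dots\big)$$

Each term glues two lower-point, fully on-shell amplitudes across a propagator 1/P², with the hatted momenta shifted by just enough of a complex amount that both halves stay on-shell. Starting from the three-particle amplitudes and iterating builds up trees with any number of particles.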

Some people work to improve one step in this process, making it as efficient as possible. Others skip one step, or all of them, replacing them with deeper ideas. Either way, the amplitudes assembly line is the background: our current industrial machine, churning out predictions.

Congratulations to Arthur Ashkin, Gérard Mourou, and Donna Strickland!

The 2018 Physics Nobel Prize was announced this week, awarded to Arthur Ashkin, Gérard Mourou, and Donna Strickland for their work in laser physics.

Some Nobel prizes recognize discoveries of the fundamental nature of reality. Others recognize the tools that make those discoveries possible.

Ashkin developed techniques that use lasers to hold small objects in place, culminating in “optical tweezers” that can pick up and move individual bacteria. Mourou and Strickland developed chirped pulse amplification, the current state of the art in extremely high-power lasers. Strickland is only the third woman to win the Nobel prize in physics, and Ashkin, at 96, is the oldest person ever to win the prize.

(As an aside, the phrase “optical tweezers” probably has you imagining two beams of laser light pinching a bacterium between them, like microscopic lightsabers. In fact, optical tweezers use a single beam, focused and bent so that if an object falls out of place it will gently roll back to the middle of the beam. Instead of tweezers, it’s really more like a tiny laser spoon.)
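The restoring effect has a particularly simple form when the trapped object is much smaller than the wavelength of the light (the Rayleigh regime); this is textbook optical-trapping physics, not anything specific to the prize announcement. The gradient force on a small bead of polarizability α in a beam of intensity I is

$$\mathbf{F}_{\text{grad}} \;\propto\; \alpha\,\nabla I(\mathbf{r}),$$

so for positive polarizability the force points up the intensity gradient, pulling the bead back toward the focus whenever it drifts away: the “laser spoon” at work.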

The Nobel announcement emphasizes practical applications, like eye surgery. It’s important to remember that these are research tools as well. I wouldn’t have recognized the names of Ashkin, Mourou, and Strickland, but I recognized atom trapping, optical tweezers, and ultrashort pulses. Hang around atomic physicists, or quantum computing experiments, and these words pop up again and again. These are essential tools that have given rise to whole subfields. LIGO won a Nobel based on the expectation that it would kick-start a vast new area of research. Ashkin, Mourou, and Strickland’s work already has.

When You Shouldn’t Listen to a Distinguished but Elderly Scientist

Of science fiction author Arthur C. Clarke’s sayings, the most famous is “Clarke’s third law”, that “Any sufficiently advanced technology is indistinguishable from magic.” Almost as famous, though, is his first law:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

Recently Michael Atiyah, an extremely distinguished but also rather elderly mathematician, claimed that something was possible: specifically, he claimed it was possible that he had proved the Riemann hypothesis, one of the longest-standing and most difficult puzzles in mathematics. I won’t go into the details here, but people are, well, skeptical.

This post isn’t really about Atiyah. I’m not close enough to that situation to comment. Instead, it’s about a more general problem.

See, the public seems to mostly agree with Clarke’s law. They trust distinguished, elderly scientists, at least when they’re saying something optimistic. Other scientists know better. We know that scientists are human, that humans age…and that sometimes scientific minds don’t age gracefully.

Some of the time, that means Alzheimer’s, or another form of dementia. Other times, it’s nothing so extreme, just a mind slowing down with age, opinions calcifying and logic getting just a bit more fuzzy.

And the thing is, watching from the sidelines, you aren’t going to know the details. Other scientists in the field will, but this kind of thing is almost never discussed with the wider public. Even here, though specific physicists come to mind as I write this, I’m not going to name them. It feels rude, to point out that kind of all-too-human weakness in someone who accomplished so much. But I think it’s important for the public to keep in mind that these people exist. When an elderly Nobelist claims to have solved a problem that baffles mainstream science, the news won’t tell you they’re mentally ill. All you can do is keep your eyes open, and watch for warning signs:

Be wary of scientists who isolate themselves. Scientists who still actively collaborate and mentor almost never have this kind of problem. There’s a nasty feedback loop when those contacts start to diminish. Being regularly challenged is crucial to test scientific ideas, but it’s also important for mental health, especially in the elderly. As a scientist thinks less clearly, they won’t be able to keep up with their collaborators as much, worsening the situation.

Similarly, beware those famous enough to surround themselves with yes-men. With Nobel prizewinners in particular, many of the worst cases involve someone treated with so much reverence that they forget to question their own ideas. This is especially risky when commenting on an unfamiliar field: often, the Nobelist’s contacts in the new field have a vested interest in holding on to their big-name support, and ignoring signs of mental illness.

Finally, as always, bigger claims require better evidence. If everything someone works on is supposed to revolutionize science as we know it, then likely none of it will. The signs that indicate crackpots apply here as well: heavily invoking historical scientists, emphasis on notation over content, a lack of engagement with the existing literature. Be especially wary if the argument seems easy: deep problems are rarely so simple to solve.

Keep this in mind, and the next time a distinguished but elderly scientist states that something is possible, don’t trust them blindly. Ultimately, we’re still human beings. We don’t last forever.